Search Results

Search found 8267 results on 331 pages for 'insert into'.

Page 118/331 | < Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >

  • DELL Inspiron 9400 SD Card Reader not working on Ubuntu 11.10

    - by Mario Martz
    I'm new to Linux. I just installed Ubuntu 11.10 on a Dell Inspiron 9400 and everything works fine with the exception of the SD card reader: every time I insert a card the computer doesn't do anything, as if the SD card reader were not there. I ran lspci and it shows the following devices: 03:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller 03:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19) 03:01.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a) 03:01.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05) Every time I insert a memory card, dmesg shows the following: card status 0x600b00 [ 2687.227351] end_request: I/O error, dev mmcblk0, sector 64 [ 2687.229436] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.229440] end_request: I/O error, dev mmcblk0, sector 65 [ 2687.230512] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.230515] end_request: I/O error, dev mmcblk0, sector 66 [ 2687.231588] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.231592] end_request: I/O error, dev mmcblk0, sector 67 [ 2687.232674] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.232678] end_request: I/O error, dev mmcblk0, sector 68 [ 2687.234763] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.234766] end_request: I/O error, dev mmcblk0, sector 69 [ 2687.236864] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.236868] end_request: I/O error, dev mmcblk0, sector 70 [ 2687.238942] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.238946] end_request: I/O error, dev mmcblk0, sector 71 [ 2687.238949] Buffer I/O error on device mmcblk0, logical block 8 [ 2687.241028] mmcblk0: retrying using single block read [ 2687.243104] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.243108] end_request: I/O error, dev mmcblk0, sector 64 [ 2687.245212] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.245215] end_request: I/O error, dev mmcblk0, sector 65 [ 2687.247298] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.247302] end_request: I/O error, dev mmcblk0, sector 66 [ 2687.248389] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.248393] end_request: I/O error, dev mmcblk0, sector 67 [ 2687.250476] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.250480] end_request: I/O error, dev mmcblk0, sector 68 [ 2687.252617] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 [ 2687.252621] end_request: I/O error, dev mmcblk0, sector 69 [ 2687.254737] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00 and more of the same but with different sector numbers. I'm using kernel 3.0.0-12-generic. By the way, during installation, when Ubuntu asks how I want to install (alongside Windows, or deleting something, or changing the partitions of the HDD), if I go to the window to change the partitions of the disk, Linux does detect the SD card (if there's one inserted, of course). Any help with this would be appreciated. Sorry for my English. Thank you

    Read the article

  • Extending WikiPlex with Scope Augmenters

    - by mhawley
    [In addition to blogging, I am also using Twitter. Follow me: @matthawley] Another extension point in WikiPlex is Scope Augmenters. Scope Augmenters allow you to post-process the collection of scopes, further augmenting, inserting, or removing scopes, prior to rendering. WikiPlex comes with 3 out-of-the-box Scope Augmenters that it uses for indentation, tables, and lists. For reference, I'll be explaining… (read more)

    Read the article

  • Is it possible to connect Flash applications to a server built with Delphi?

    - by Japie Bosman
    I have developed a client/server application with Delphi using Firebird, with some clients that are also written in Delphi, which I know how to do. What I want to know is whether it is possible to connect Flash applications as clients to the Delphi-developed server. The Flash applications must be able to access data in the database and also be able to update and insert records. Can something like this be done? I'm using Delphi XE2, and the Flash side is written in ActionScript 3.

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher-level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies the creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.
    In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve given an overview of the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.
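    To make the narrow-update idea concrete outside of expressor, here is roughly what that update-oriented composite type corresponds to in plain T-SQL (a sketch only; it assumes the HumanResources.Employee column names used in this post, and the key value in the WHERE clause is purely illustrative):
      -- Sketch: touch only the columns the update-oriented composite type carries
      UPDATE HumanResources.Employee
      SET    VacationHours  = VacationHours + 8,
             SickLeaveHours = SickLeaveHours + 4,
             ModifiedDate   = GETDATE()
      WHERE  EmployeeID = 1;  -- illustrative key value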
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • How to Assign a Default Signature in Outlook 2013

    - by Lori Kaufman
    If you sign most of your emails the same way, you can easily specify a default signature to automatically insert into new email messages and replies and forwards. This can be done directly in the Signature editor in Outlook 2013. We recently showed you how to create a new signature. You can also create multiple signatures for each email account and define a different default signature for each account. When you change your sending account while composing a new email message, the signature changes automatically as well. NOTE: To have a signature added automatically to new email messages and replies and forwards, you must have a default signature assigned in each email account. If you don’t want a signature in every account, you can create a signature with just a space, a full stop, dashes, or other generic characters. To assign a default signature, open Outlook and click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. To change the default signature for an email account, select the account from the E-mail account drop-down list on the top, right side of the dialog box under Choose default signature. Then, select the signature you want to use by default for New messages and for Replies/forwards from the other two drop-down lists. Click OK to accept your changes and close the dialog box. Click OK on the Outlook Options dialog box to close it. You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. Click Signature in the Include section of the New Mail Message window and select Signatures from the drop-down menu. In the next few days, we will be covering how to use the features of the signature editor, and then how to insert and change signatures manually, back up and restore your signatures, and modify a signature for use in plain text emails.

    Read the article

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
    While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it?  The answer is yes, and it's really quite easy to do. The first step is finding the installed help files.  If you have SQL Server 2012, BOL is installed under the Microsoft Help Library.  You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager.  When the new window pops up, click the Settings link and you'll get the following: You'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store.  This is where the help file content is kept if you downloaded it for offline use. Depending on which products you've downloaded help for, you may see a few hundred files.  Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files.  We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files. Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7Zip, WinRar, etc.) can open them.  When you do, you'll see a few thousand files in the archive.  We are only interested in the .htm files, but there's no harm in extracting all of them to a folder.  7zip provides a command-line utility and the following will extract to a previously created D:\SQLHelp folder: 7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side.  I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation: USE Explore; GO CREATE SCHEMA help AUTHORIZATION dbo; GO -- Create default fulltext catalog for later FT indexes CREATE FULLTEXT CATALOG FTC AS DEFAULT; GO CREATE TABLE help.files(file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY, path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE, doc_type varchar(6) DEFAULT('.xml'), content varbinary(max) not null); CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033) KEY INDEX PK_help_files; This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert.  I'll be using the command line again for this; it's the easiest method I know: for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)" You'll need to copy and run that as one line in a command prompt.  I'll explain what this does while you run it and watch several thousand files get imported: The "for" command allows you to loop over a collection of items.  In this case we want all the .htm files in the D:\SQLHelp folder.  For each file it finds, it will assign the full path and file name to the %a variable.  In the "do" clause, we'll specify another command to be run for each iteration of the loop.  I make a call to "sqlcmd" in order to run a SQL statement.
I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection.  I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object.  In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier.  This process continues for each file in the folder, creating one new row in the table. And that's it! 5 SQL Statements and 2 command line statements to unzip and import SQL Server Books Online!  In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so.  I believe that will make it even easier. In the spirit of exploration, I'll leave you to work on some fulltext queries of this content.  I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting).  There are additional example queries in the download material for my presentation linked above. Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files.  Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords.  (You probably noticed this in the default definition for the doc_type column.)  You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed. I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago.  He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend.  He also has some FTS articles on Simple Talk: http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/ http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/
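    Once the import and full-text population finish, a couple of starter queries (a sketch, reusing the Explore database and help.files table from above) might look like this:
      USE Explore;
      GO
      -- Which help topics mention CHECKIDENT?
      SELECT path
      FROM   help.files
      WHERE  CONTAINS(content, 'CHECKIDENT');
      GO
      -- Most frequent keywords extracted by the full-text indexer
      SELECT TOP (20) display_term, document_count
      FROM   sys.dm_fts_index_keywords(DB_ID('Explore'), OBJECT_ID('help.files'))
      ORDER  BY document_count DESC;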

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In this example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB.
    Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback to running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • SQL MDS - Updating the Name attribute of member using Staging Table

    - by Randy Aldrich Paulo
    Creating a member is usually done by populating the member staging table (tblStgMember); during this process you assign a value for the member code and the member name. Now, if you want to update the member Name attribute, you can do this by adding a record to the attribute staging table (tblStgMemberAttribute) with AttributeName = "Name". If you try populating the tblStgMember table instead, it will say that the member code already exists.   INSERT INTO mdm.tblStgMemberAttribute (ModelName, EntityName, MemberType_ID, MemberCode, AttributeName, AttributeValue) VALUES (N'Product', N'Product', 1, N'BK-M101', N'Name',N'Updated Member Name Description')
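    For reference, creating the member in the first place follows the same staging pattern; a sketch only (the tblStgMember column names here are assumed to parallel the attribute staging table, so verify them against your own mdm.tblStgMember definition before using this):
      -- Sketch: stage a new member (assumed column names; check mdm.tblStgMember in your MDS database)
      INSERT INTO mdm.tblStgMember
          (ModelName, EntityName, MemberType_ID, MemberName, MemberCode)
      VALUES
          (N'Product', N'Product', 1, N'Mountain Bike 101', N'BK-M101')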

    Read the article

  • SQL SERVER – Reseting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than the motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure if you read it you will feel my pain. DBA: “I deleted all of the data from my database and now it contains the table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data.” Pinal: “How did you delete the data?” DBA: “Running DELETE in a loop.” Pinal: “What was the need for that?” DBA: “It was my development server and I needed to repopulate the database.” Pinal: “Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original value when the data got deleted? This will work only if you want your database to reset to the original value. If you want to set any other value this may not work.” DBA: (silence for 2 days) DBA: “I did not realize it. Meanwhile I regenerated every table’s schema and dropped the table and re-created it.” Pinal: “Oh no, that would be an extremely long and incorrect way. Very bad solution.” DBA: “I understand. Should I just take a backup of the database before I insert the data and, when I need it, use the original backup to restore the database? This way I will have identity beginning with 1.” Pinal: “This is going totally downhill. It is wrong to do so on multiple levels. Did you even read my earlier email about TRUNCATE?” DBA: “Yeah. I found it in the spam folder.” Pinal: (I decided to stay silent) DBA: (After 2 days) “Can you provide me a script to reseed the identity for all of my tables to the value 1, without asking further questions?” Pinal: USE DATABASE; EXEC sp_MSForEachTable ' IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1 DBCC CHECKIDENT (''?'', RESEED, 1)' GO Our conversation ended here. If you have directly jumped to this statement, I encourage you to read the conversation one time. There is a difference between reseeding the identity value to 1 and reseeding it to the original value – I will write another blog post on this subject in the future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
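    The TRUNCATE point from the conversation is easy to verify with a small throwaway table; a minimal sketch (the table name is made up):
      CREATE TABLE dbo.IdentityDemo (ID INT IDENTITY(1,1) PRIMARY KEY, Val CHAR(1));
      INSERT INTO dbo.IdentityDemo (Val) VALUES ('a'), ('b'), ('c');
      DELETE FROM dbo.IdentityDemo;               -- rows gone, but the identity keeps counting
      INSERT INTO dbo.IdentityDemo (Val) VALUES ('d');
      SELECT ID FROM dbo.IdentityDemo;            -- returns 4
      TRUNCATE TABLE dbo.IdentityDemo;            -- rows gone AND identity reset to the seed
      INSERT INTO dbo.IdentityDemo (Val) VALUES ('e');
      SELECT ID FROM dbo.IdentityDemo;            -- returns 1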

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments, that is, you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze code fragments, i.e., from one statement to another statement, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html However, right-clicking a Java file and then going all the way down a longish list of menu items, to find "Profiling", and then "Insert Profiling Point" is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and then setting a profiling point in exactly the same way as a breakpoint: That's much easier and more intuitive and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point: To achieve this, I added the following to the "layer.xml" file: <folder name="Editors"> <folder name="AnnotationTypes"> <file name="profiler.xml" url="profiler.xml"/> <folder name="ProfilerActions"> <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow"> <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/> <attr name="position" intvalue="300"/> </file> </folder> </folder> </folder> Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is the content: <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'> <type name='editor-profiler' description_key='HINT_PROFILER' localizing_bundle='org.netbeans.eppi.Bundle' visible='true' type='line' actions='ProfilerActions' severity='ok' browseable='false'/> Only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too, i.e., there's no MIME type specific glyphgutter, instead, it is shared by all MIME types. Little bit confusing that the profiler point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files, i.e., the developers wanted to avoid confusion and make it available to Java files only. However, I think that, since I'm already aware that I can't set the Java debugger in an HTML file, I'm also aware that the Java profiler can't be set that way as well. If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • Apple MBP 7,1 Color profile xcalib

    - by cloudlight
    I have installed 64-bit Kubuntu on my MBP 7,1. After installation I was going through the following documentation (screen): https://help.ubuntu.com/community/MacBookPro7-1/Maverick#Screen I am confused about the line /usr/bin/xcalib "/etc/xcalib/<insert name of profile here>". Which profile should I put in that place? I get the following output at the command prompt: bharat@bharat-MacBookPro:~$ ls /etc/xcalib Color LCD-00000610-0000-9CC5-0000-000004273140.icc EPSON PJ -00004CA3-0000-A600-0000-000028E98001.icc SyncMaster-00004C2D-0000-0117-4C45-31370B4074F5.icc

    Read the article

  • Creating an ASP.NET Dynamic Web Page Using MS SQL Server 2008 Database (GridView Display)

    Dynamic pages, which pull, insert, update, and delete data or content from a database, are extremely useful in modern websites. They provide a high level of user interactivity that improves the user experience. This article will show you how to create such pages in ASP.NET using a Microsoft SQL Server 2008 database…

    Read the article

  • Is Tracking Software Usage Illegal?

    - by Graviton
    Let's say I am writing a desktop application, and I am interested to know whether our software really gets used or not. Is it all right to insert code that tracks whether our software is used, for how long, and so on? Note that no personally identifiable information will be collected; all I am interested in knowing is how frequently and for how long the software is used. The information will be sent to our server for diagnosis. What do you think?

    Read the article

  • SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index

    This blog is in response to ongoing communication with the reader who had earlier asked the question in SQL SERVER – Disable Clustered Index and Data Insert. The same reader has asked me the difference between ALTER INDEX ALL REBUILD and ALTER INDEX REBUILD along with a disabled clustered index. Instead of writing a big [...]...

    Read the article

  • xubuntu 13.10: automount usb sticks

    - by netimen
    I have a fresh install of Xubuntu 13.10 on a Lenovo T520 laptop. In the Settings Manager — Removable Drives and Media, I have "Mount removable drives when hot-plugged" and "Mount removable media when inserted" checked. But when I insert a USB stick (VFAT) it doesn't get auto-mounted, so I can't access it from the terminal. It only gets mounted when I click on the drive icon on the desktop or in Thunar. Can I fix this somehow?

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table as a nullable column or as a column with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script: USE master Go --Drop Database if exists IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn') DROP DATABASE AddColumn --Create the database CREATE DATABASE AddColumn GO USE AddColumn GO --Drop the table if exists IF EXISTS ( SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable') DROP TABLE ExistingTable GO --Create the table CREATE TABLE ExistingTable (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED, DateTime1 DATETIME DEFAULT GETDATE(), DateTime2 DATETIME DEFAULT GETDATE(), DateTime3 DATETIME DEFAULT GETDATE(), DateTime4 DATETIME DEFAULT GETDATE(), Gender CHAR(1) DEFAULT 'M', STATUS1 CHAR(1) DEFAULT 'Y' ) GO -- Insert 100,000 records with default values INSERT INTO ExistingTable DEFAULT VALUES GO 100000 Before adding a column, let us look at some of the details of the database. DBCC IND (AddColumn,ExistingTable,1) By running the above command, you will see 637 pages for the created table. Adding a Column: You can add a column to the table with the following statement. ALTER TABLE ExistingTable Add NewColumn INT NULL The above will add a column with a NULL value for the existing records. Alternatively, you could add a column with a default value. ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1 The above statement will add a column with a value of 1 for the existing records. In the table below I measured the performance difference between the above two statements. Parameter (Nullable Column / Default Value): CPU 31 / 702; Duration 129 ms / 6653 ms; Reads 38 / 116,397; Writes 6 / 1329; Row Count 0 / 100000. If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is why it takes more time and CPU to add a column with a default value. We can verify this by several methods. Number of Pages: The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk". DBCC IND (AddColumn,ExistingTable,1) Before adding the columns: 637; adding a column with NULL: 637; adding a column with DEFAULT value: 1270. This clearly shows that pages are physically modified. Please note, the high value in the "adding a column with DEFAULT value" case is also a result of page splits. Continues…
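    Since DBCC IND is undocumented, a documented way to watch the page count change is sys.dm_db_index_physical_stats; a sketch using the same database and table names as above:
      USE AddColumn;
      GO
      -- Page count per index level, before and after each ALTER TABLE
      SELECT index_id, index_level, page_count
      FROM   sys.dm_db_index_physical_stats(DB_ID('AddColumn'), OBJECT_ID('dbo.ExistingTable'), NULL, NULL, 'DETAILED')
      ORDER  BY index_id, index_level;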

    Read the article

  • What Are Advanced Search Operators?

    Search engines provide additional tools, referred to as advanced search operators, to give power users far more control when searching. Advanced search operators are special terms that you can insert into your search query to find specific kinds of information that an ordinary search cannot provide. A number of these operators are useful tools for search engine professionals and others who need very specific information, or who want to narrow their search to very specific sources.

    Read the article

  • Searching for a key in a multi dimensional array and adding it to another array [migrated]

    - by Moha
    Let's say I have two multidimensional arrays: array1 ( stuff1 = array ( data = 'abc' ) stuff2 = array ( something = '123' data = 'def' ) stuff3 = array ( stuff4 = array ( data = 'ghi' ) ) ) array2 ( stuff1 = array ( ) stuff3 = array ( anything = '456' ) ) What I want is to search for the key 'data' in array1 and then insert the key and value into array2, regardless of the depth. So wherever the key 'data' exists in array1, it gets added to array2 with the same depth (and key names) as in array1, and without modifying any other keys. How can I do this recursively?

    Read the article

  • Alternatives for 'egrep -o "success|error|fail" <filename> | sort | uniq -c'

    - by Wolfy
    I sometimes need to check some logs, and I do this with this command: egrep -o "success|error|fail" <filename> | sort | uniq -c Sample input: test error on line 10 test connect success test insert success test started at 00:00 test delete fail Sample output: 1 error 1 fail 2 success I would like to know if someone knows a way to do this with a shorter command. Before you ask why I would like to do this with a different command... no special reason, I'm just curious :)

    Read the article

  • How to calculate the covariance in T-SQL

    - by Peter Larsson
    DECLARE @Sample TABLE (x INT NOT NULL, y INT NOT NULL) INSERT @Sample VALUES (3, 9), (2, 7), (4, 12), (5, 15), (6, 17) ;WITH cteSource(x, xAvg, y, yAvg, n) AS ( SELECT 1E * x, AVG(1E * x) OVER (PARTITION BY (SELECT NULL)), 1E * y, AVG(1E * y) OVER (PARTITION BY (SELECT NULL)), COUNT(*) OVER (PARTITION BY (SELECT NULL)) FROM @Sample ) SELECT SUM((x - xAvg) * (y - yAvg)) / MAX(n) AS [COVAR(x,y)] FROM cteSource
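    For this sample the query returns 5.2. The same population covariance can also be written more compactly using the identity COVAR(x,y) = E[xy] - E[x]*E[y]; a sketch, to be run in the same batch as the table variable above:
      SELECT AVG(1E * x * y) - AVG(1E * x) * AVG(1E * y) AS [COVAR(x,y)]
      FROM   @Sample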

    Read the article

  • How can I fix the CD/DVD refresh for the media drive so that it is recognized automatically?

    - by Denja
    Hello Community, I have recently installed Ubuntu 10.10 and it seems to work flawlessly. But today I tried to check some CDs and I was surprised that Ubuntu doesn't automatically refresh the CD/DVD inserted in my media drive. I had to manually eject the CD/DVD from Ubuntu and then insert the new CD, and only then does Ubuntu read the data on the newly inserted CD/DVD. How can I fix the CD/DVD refresh for the media drive so that Ubuntu 10.10 recognizes it automatically?

    Read the article

  • Making Use of Advanced Search Operators

    Search engines have set up extra tools, referred to as advanced search operators, to give professional users additional control when searching. Advanced search operators are special words that you can insert into your search query in order to find specific kinds of information that a common search cannot supply. Many of these operators are handy tools for SEO professionals and other people who need very specific information, or who prefer to restrict their search to very specific results.

    Read the article

  • Equation solver project

    - by Victor Barbu
    I would like to start a project intended for students. My application has to solve any kind of equation, passed by the user as a string, exactly like MATLAB's solve function. How should I do this? What programming language is best for this purpose? Thanks in advance. P.S. These are screenshots made in MATLAB, showing how I would like the user to enter the equation and receive the answer.

    Read the article

  • Does the Adblock Plus extension prevent malicious code from downloading/executing? [closed]

    - by nctrnl
    Firefox and Chrome are my favourite browsers. The main reason is an extension called Adblock Plus. Basically, it blocks all the ad networks if you subscribe to one of the lists, like EasyList. Does it also protect against malicious ads on completely legitimate websites? For instance, several news websites use ad services that may allow a malicious user to insert "evil code". This makes the web very unsafe, especially for those who lack a serious antivirus product.

    Read the article
