Search Results



  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update Salesforce data from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website.

    Step 1: Download and install the RSSBus Excel Add-In available on our website.

    Step 2: Open Excel and create placeholder cells for the connection details that are needed by the macro. In this article, a spreadsheet will be created for batch inserts; these cells will store the connection details and will be used to report the job Id, the batch Id, and the batch status.

    Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce.

    Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a stored procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 (created in Step 2) to read the connection settings from the spreadsheet and to write out the status of the procedure.

        ' The Sub declaration, the On Error statement, and the End Sub below are implied
        ' by the Error: handler in the original sample; "module" is the Add-In object
        ' provided by the referenced sample file.
        Sub CreateSalesforceJob()
            On Error GoTo Error
            methodName = "CreateJob"
            module.SetProviderName ("Salesforce")
            nameArray = Array("ObjectName", "Action", "ConcurrencyMode")
            valueArray = Array("Lead", "insert", "Serial")
            user = Range("B1").value
            pass = Range("B2").value
            atoken = Range("B3").value
            If (Not user = "" And Not pass = "" And Not atoken = "") Then
                module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";")
                If module.CallSP(methodName, nameArray, valueArray) Then
                    Dim ColumnCount As Integer
                    ColumnCount = module.GetColumnCount
                    Dim idIndex As Integer
                    ' Find the index of the "id" column in the result set
                    For Count = 0 To ColumnCount - 1
                        Dim colName As String
                        colName = module.GetColumnName(Count)
                        If module.GetColumnName(Count) = "id" Then
                            idIndex = Count
                        End If
                    Next
                    ' Write the returned job Id to the status cell
                    While (Not module.EOF)
                        Range("B4").value = module.GetValue(idIndex)
                        module.MoveNext
                    Wend
                Else
                    MsgBox "The CreateJob query failed."
                End If
                Exit Sub
            Else
                MsgBox "Please specify the connection details."
                Exit Sub
            End If
        Error:
            MsgBox "ERROR: " & Err.Description
        End Sub

    Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com under Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo.

    Distributing an Excel File With Macros

    An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed.

    Macro Sample File

    Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.
    Note: You may get an error message stating "Can't find project or library." in Excel 2007, since this example was made using Excel 2010. To resolve this, navigate to Tools -> References, uncheck "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" entry listed below it.


  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore indexes, and both got very interesting readership. In fact, just recently I got a query on my previous article on the Columnstore index. Read the following two articles to get familiar with the Columnstore index; they give context for the question asked by a reader:

    SQL SERVER – Fundamentals of Columnstore Index
    SQL SERVER – How to Ignore Columnstore Index Usage in Query

    Here is the reader's question: "When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?"

    When a Columnstore index is created on a table, the table becomes read-only and does not allow any insert/update/delete. The basic understanding is that a Columnstore index will be created on a table that is very large and holds lots of data. If a table is small enough, there is no need to create a Columnstore index; a regular index should suffice. The reason the Columnstore index was needed in the first place is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge by itself. If a Columnstore index exists on a table and the table needs to be updated, you should know that there are various ways to do it. The easiest is to disable the index, perform the update, and re-enable the index. Consider the following code:

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
            ([SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100
        -- Create Columnstore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
            ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Attempt to update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        /*
        It will throw the following error:
        Msg 35330, Level 15, State 1, Line 2
        UPDATE statement failed because data cannot be updated in a table with a columnstore index.
        Consider disabling the columnstore index before issuing the UPDATE statement,
        then rebuilding the columnstore index after UPDATE is complete.
        */

    A similar error also shows up for INSERT/DELETE operations. Here is the workaround: disable the Columnstore index, perform the update, and then re-enable (rebuild) the Columnstore index:

        -- Disable the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
        GO
        -- Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        -- Rebuild the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
        GO

    This time no error is thrown, and the update of the table goes through successfully.
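    As an aside, if you are ever unsure whether a Columnstore index is currently active or disabled, the index metadata answers that directly. A minimal check against the sample table above (sys.indexes is standard catalog metadata; is_disabled = 1 means the index must be rebuilt before it is usable again):

        -- Check whether the Columnstore index on the sample table is disabled
        SELECT name, type_desc, is_disabled
        FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.MySalesOrderDetail');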
    Let us do a cleanup of our tables using this code:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In the next post we will see how we can use partitions to update the Columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced at a recent project was with the way our constraints were being managed. We were using Subsonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too.

    The problem came when we decided to use the generate-scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.

    A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk-load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint, it doesn't need to verify that the constraint holds when building a query plan, so the query can be executed much faster.

    Here is an example based on this article on TechNet. We create two tables with a foreign key constraint between them, add a single row to each, and then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by executing the following SQL to query the 'is_disabled' and 'is_not_trusted' properties:

        select name, is_disabled, is_not_trusted from sys.foreign_keys

    This shows the constraint with is_disabled = 0 and is_not_trusted = 0. We can disable the constraint using this SQL:

        alter table t2 NOCHECK CONSTRAINT fk_t2_t1

    When we query the constraints again, we see that the constraint is now disabled and not trusted (both flags are 1). So the constraint won't be enforced, and we could insert data into table t2 that doesn't match the data in t1 – but we don't want to do this, so we enable the constraint again using this SQL:

        alter table t2 CHECK CONSTRAINT fk_t2_t1

    But when we query the constraints once more, we see that the constraint is enabled, yet it is still not trusted. This means the optimizer cannot assume the constraint holds when it builds query plans over these tables (for example, it cannot use the foreign key to simplify the EXISTS query above), which will impact the performance of those queries, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
    First we should check that our constraint hasn't been violated, which we can do by running DBCC:

        DBCC CHECKCONSTRAINTS (t2)

    Hopefully DBCC completes without finding any violations of the constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:

        alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

    At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax: the first CHECK (part of WITH CHECK) tells SQL Server to validate the existing rows, and the second enables the constraint. When we query the constraint's properties again, we find that it is now trusted. To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
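    Since NOCHECK can slip back in unnoticed (for example, through generated DDL scripts like the ones above), it is worth auditing for untrusted constraints periodically. A small sketch that lists every untrusted foreign key and check constraint in the current database – both catalog views expose the same two flags used above:

        -- Find all constraints the optimizer does not trust
        SELECT name, is_disabled, is_not_trusted FROM sys.foreign_keys
        WHERE is_not_trusted = 1
        UNION ALL
        SELECT name, is_disabled, is_not_trusted FROM sys.check_constraints
        WHERE is_not_trusted = 1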


  • Collision detection via adjacent tiles - sprite too big

    - by BlackMamba
    I have managed to create a collision detection system for my tile-based jump'n'run game (written in C++/SFML), where on each update I check the values of the tiles surrounding the player and then let the player move accordingly (i.e. move left when there is an obstacle on the right side). This works fine when the player sprite is not too big: given a tile size of 5x5 pixels, my solution worked quite well with sprite sizes of 3x4 and 5x5 pixels.

    My problem is that I actually need the player to be quite gigantic (34x70 pixels, given the same tile size). When I try this, there seems to be an invisible, notably smaller bounding box where the player collides with obstacles, and the player also shakes strongly. Here are some images to explain what I mean:

    Works: http://tinypic.com/r/207lvfr/8
    Doesn't work: http://tinypic.com/r/2yuk02q/8
    Another example of it not working: http://tinypic.com/r/kexbwl/8 (the player isn't falling; he stays there in the corner)

    My code for getting the surrounding tiles looks like this (I removed some parts to make it more readable):

        std::vector<std::map<std::string, int> > Game::getSurroundingTiles(sf::Vector2f position)
        {
            // converting the pixel coordinates to tilemap coordinates
            sf::Vector2u pPos(static_cast<int>(position.x/tileSize.x), static_cast<int>(position.y/tileSize.y));
            std::vector<std::map<std::string, int> > surroundingTiles;
            for(int i = 0; i < 9; ++i)
            {
                // calculating the relative position of the surrounding tile(s)
                int c = i % 3;
                int r = static_cast<int>(i/3);
                // we subtract 1 to place the player in the middle of the 3x3 grid
                sf::Vector2u tilePos(pPos.x + (c - 1), pPos.y + (r - 1));
                // this tells us what kind of block this tile is
                int tGid = levelMap[tilePos.y][tilePos.x];
                // converts the coords from tile to world coords (5 = tile size)
                sf::Vector2u tileRect(tilePos.x*5, tilePos.y*5);
                // storing all the information
                std::map<std::string, int> tileDict;
                tileDict.insert(std::make_pair("gid", tGid));
                tileDict.insert(std::make_pair("x", tileRect.x));
                tileDict.insert(std::make_pair("y", tileRect.y));
                // adding the stored information to our vector
                surroundingTiles.push_back(tileDict);
            }
            // I organise the map so that it is arranged like the following:
            /*
             * 4 | 1 | 5
             * -- -- --
             * 2 | / | 3
             * -- -- --
             * 6 | 0 | 7
             */
            return surroundingTiles;
        }

    I then loop through the surrounding tiles and, if a tile's gid is 1 (which indicates an obstacle), check for intersections with that adjacent tile. The problem I just can't overcome is that I think I need to store the values of all the tiles the sprite overlaps and then check against them – a 3x3 grid of 5x5 tiles only covers 15x15 pixels, which is far smaller than a 34x70 sprite. How? And might there be a better solution? Any help is appreciated.

    P.S.: My implementation derives from this blog entry; I mostly just translated it from Objective-C/Cocos2d.


  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story of a friend who thought they had an issue with intrusion or a virus, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.

    Building Sample Data

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
            [ProductID] [int] NOT NULL,
            [ProductDesc] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        ) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
            [ProductDetailID] [int] NOT NULL,
            [ProductID] [int] NOT NULL,
            [Total] [int] NOT NULL,
            CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
        ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
        REFERENCES [dbo].[Products] ([ProductID])
        ON UPDATE CASCADE
        ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike'
        UNION ALL
        SELECT 2, 'Car'
        UNION ALL
        SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200
        UNION ALL
        SELECT 2, 1, 100
        UNION ALL
        SELECT 3, 1, 111
        UNION ALL
        SELECT 4, 2, 200
        UNION ALL
        SELECT 5, 3, 100
        UNION ALL
        SELECT 6, 3, 100
        UNION ALL
        SELECT 7, 3, 200
        GO

    Select Data from Tables

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete Data from Products Table

        -- Deleting Data
        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select Data from Tables Again

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up Data

        -- Clean up
        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused: no DELETE was ever fired against the ProductDetails table, yet rows were being deleted from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data from Table A is deleted, any rows in other tables that reference it through the foreign key are deleted as well.

    Workaround 1: Design Changes – 3 Tables
    Change the design to have more than two tables. Create one ProductMaster table with all the products. It should historically store the complete products list; no products should ever be removed from it. Add another table called CurrentProduct containing only the products that should be visible in the product catalogue, and a third table called ProductHistory. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – Column IsVisible
    You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column with the BIT datatype and name it IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything. (See the sketch at the end of this post.)

    Workaround 3: Bad Advice
    (Bad advice begins here.) I call this bad advice because that is exactly what it is: you should make the necessary design changes, not use poor workarounds that can further damage system and database integrity. Here are the examples:
    1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes.
    2) Drop the ON DELETE CASCADE option – in this case, you will end up with entries in ProductDetails that have no corresponding ProductID, and later on there will be lots of confusion.
    3) Duplicate the data – move all the data from the Products table into the ProductDetails table, repeating it on each row, and then remove the CASCADE code. This will let you delete Products rows without any issue. There are so many things wrong with this suggestion that I will not even start here.
    (Bad advice ends here.)

    Well, did I miss anything? Please help me with your suggestions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
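    For reference, Workaround 2 above might look like the following against the sample tables – a sketch only; the default-constraint name and the decision to put the flag on Products are my assumptions, not part of the original schema:

        -- Add the visibility flag to Products, defaulting existing rows to visible
        ALTER TABLE dbo.Products
        ADD IsVisible BIT NOT NULL CONSTRAINT DF_Products_IsVisible DEFAULT(1)
        GO
        -- "Delete" a product by hiding it; no rows are removed, so nothing cascades
        UPDATE dbo.Products SET IsVisible = 0 WHERE ProductID = 1
        GO
        -- The catalogue query then filters on the flag
        SELECT ProductID, ProductDesc
        FROM dbo.Products
        WHERE IsVisible = 1
        GO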


  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview report. The objective is to show test status for the current version while including user story status of the current and prior versions. Why? Because we don't copy completed user stories into the next release. We only want one instance of a user story for the product, because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here's how you do it:

    1. Download a copy of the report RDL file as a backup.

    2. Open the report by clicking the edit down arrow and selecting "Edit in Report Builder".

    3. Right-click on the dsOverview dataset and select Dataset Properties.

    4. Update the following SQL per the comments in the code.

    Customization 1 of 3:

        ...
        -- Get the list of deliverable work items that have Test Cases linked
        DECLARE @TestCases Table (DeliverableID int, TestCaseID int);
        INSERT @TestCases
            SELECT h.ID, flh.TargetWorkItemID
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                    AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON flh.TargetWorkItemID = wi.[System_ID]
                    AND wi.[System_WorkItemType] = @TestCase
                    AND wi.ProjectNodeGUID = @ProjectGuid
                    -- Customization 1 of 3: only include test status information when
                    -- test case area path = root. Added the following 2 statements
                    AND wi.AreaPath = '{the root area path of the team project}'
        ...

    Customization 2 of 3:

        ...
        -- Get the Bugs linked to the deliverable work items directly
        DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int)
        INSERT @Bugs
            SELECT h.ID,
                SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
                SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
                SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
                SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON wi.[System_WorkItemType] = @Bug
                    AND wi.[System_Id] = flh.TargetWorkItemID
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND wi.[ProjectNodeGUID] = @ProjectGuid
                    -- Customization 2 of 3: only include test status information when
                    -- test case area path = root. Added the following statement
                    AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY h.ID
        ...

    Customization 3 of 3:

        ...
        -- Add the Bugs linked to the Test Cases which are linked to the deliverable work items.
        -- Walks the links from the user stories to test cases (via the tested-by link), and then to
        -- bugs that are linked to the test case. We don't need to join to the test case in the work
        -- item history view.
        --
        --    [WIT:User Story/Requirement] --> [Link:Tested By] --> [Link:any type] --> [WIT:Bug]
        INSERT @Bugs
            SELECT tc.DeliverableID,
                SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
                SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
                SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
                SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
            FROM @TestCases tc
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = tc.TestCaseID
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON wi.[System_Id] = flh.TargetWorkItemID
                    AND wi.[System_WorkItemType] = @Bug
                    AND wi.[ProjectNodeGUID] = @ProjectGuid
                    -- Customization 3 of 3: only include test status information when
                    -- test case area path = root. Added the following statement
                    AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY tc.DeliverableID
        ...

    5. Save the report and you're all set. Note: you may need to re-apply custom parameter changes like pre-selected sprints.
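    One possible refinement – my own suggestion, not part of the original walkthrough – is to declare the root area path once at the top of the dataset query, so the literal does not have to be kept in sync across all three customizations:

        DECLARE @RootAreaPath NVARCHAR(256) = N'{the root area path of the team project}';
        -- ...each customization's filter then becomes:
        -- AND wi.AreaPath = @RootAreaPath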


  • I am trying to create a Windows application watcher [migrated]

    - by Broken_Code
    I recently started coding in C# (in May this year), and I find it best to learn by working with code, so I am trying to recreate this application: http://www.c-sharpcorner.com/UploadFile/satisharveti/ActiveApplicationWatcher01252007024921AM/ActiveApplicationWatcher.aspx. Mine, however, will be saving the information into a SQL database (I'm new at this as well). I am having some coding problems, though, as it does not do what I expect it to do. This is the main code I am using:

        private void GetTotalTimer()
        {
            DateTime now = DateTime.Now;
            IntPtr hwnd = APIFunc.getforegroundWindow();
            Int32 pid = APIFunc.GetWindowProcessID(hwnd);
            Process p = Process.GetProcessById(pid);
            appName = p.ProcessName;
            const int nChars = 256;
            int handle = 0;
            StringBuilder Buff = new StringBuilder(nChars);
            handle = GetForegroundWindow();
            appltitle = APIFunc.ActiveApplTitle().Trim().Replace("\0", "");
            //if (GetWindowText(handle, Buff, nChars) > 0)
            //{
            //    string strbuff = Buff.ToString();
            //    StrWindow = strbuff;
            #region insert statement
            try
            {
                if (Conn.State == ConnectionState.Closed)
                {
                    Conn.Open();
                }
                if (Conn.State == ConnectionState.Open)
                {
                    SqlCommand com = new SqlCommand("Select top 1 [Window Title] From TimerLogs ORDER BY [Time of Event] DESC", Conn);
                    SqlDataReader reader = com.ExecuteReader();
                    startTime = DateTime.Now;
                    string time = now.ToString();
                    if (!reader.HasRows)
                    {
                        reader.Close();
                        cmd = new SqlCommand("insert into [TimerLogs] values(@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                        cmd.Parameters.AddWithValue("@time", time);
                        cmd.Parameters.AddWithValue("@appName", appName);
                        cmd.Parameters.AddWithValue("@appltitle", appltitle);
                        cmd.Parameters.AddWithValue("@Elapsed_Time", blank.ToString());
                        cmd.Parameters.AddWithValue("@userName", userName);
                        cmd.ExecuteNonQuery();
                        Conn.Close();
                    }
                    else if (reader.HasRows)
                    {
                        reader.Read();
                        // note: this compares the title to reader.ToString() (the reader object
                        // itself), not to the value of the [Window Title] column that was read
                        if (appltitle != reader.ToString())
                        {
                            reader.Close();
                            endTime = DateTime.Now;
                            appduration = endTime.Subtract(startTime);
                            cmd = new SqlCommand("insert into [TimerLogs] values (@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                            cmd.Parameters.AddWithValue("@time", time);
                            cmd.Parameters.AddWithValue("@appName", appName);
                            cmd.Parameters.AddWithValue("@appltitle", appltitle);
                            cmd.Parameters.AddWithValue("@Elapsed_Time", appduration.ToString());
                            cmd.Parameters.AddWithValue("@userName", userName);
                            cmd.ExecuteNonQuery();
                            reader.Close();
                            Conn.Close();
                        }
                    }
                }
            }
            catch (Exception)
            {
            }
            //}
            #endregion
            ActivityTimer.Start();
            Processing = "Working";
        }

    Unfortunately, it is not saving the data as I expect it to. What am I doing wrong? I had thought that with the SqlDataReader it would first check for an existing value and only save if they do not match; however, it is saving whether there is a match or not.
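    For anyone trying to reproduce this, the code implies a TimerLogs table with five columns in the order of the INSERT statement. A plausible schema sketch – only [Window Title] and [Time of Event] appear in the question's SELECT; the other three column names are my guesses:

        CREATE TABLE TimerLogs (
            [Time of Event] DATETIME      NOT NULL, -- @time (the code passes now.ToString(), converted implicitly)
            [App Name]      NVARCHAR(128) NULL,     -- @appName (name assumed)
            [Window Title]  NVARCHAR(256) NULL,     -- @appltitle
            [Elapsed Time]  NVARCHAR(50)  NULL,     -- @Elapsed_Time (name assumed)
            [User Name]     NVARCHAR(128) NULL      -- @userName (name assumed)
        );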


  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance; it can actually return blatantly wrong information.

    There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax. The two most common explanations I have seen are:

    Performance: The SELECT * syntax returns every column in the table, but frequently you really only need a few of the columns, so by using SELECT * you are retrieving large volumes of data that you don't need but that the system has to process, marshal across tiers, and so on. It would be much more efficient to select only the specific columns that you need.

    Future-proofing: If you are taking other shortcuts in your code along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system. For example, if you use SELECT * to return results from a table into a DataTable in .NET and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns' ordinal positions. Or if you use INSERT…SELECT *, then you will likely run into errors when a new column is added to the source table in any position.

    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proofing problem mentioned above. One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development. I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly. I'll walk you through the test script so you can see for yourself what happens.

    We are going to create a table and two views that are based on that table: one uses SELECT * and the other explicitly lists the column names. The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        go
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        go
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        go

        CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        go

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good. Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns? (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema it may make sense to group certain columns together. Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)

        DROP TABLE testtab
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        go
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected the view using SELECT * in its definition to essentially pass the column names through and still retrieve the correct data, but that is not what happens. When you run our two SELECT statements again, you see that the view based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time the view was created: the query asks for col2 but receives the data from col3. Sure, one workaround is to recreate the view, but you can't really count on other developers to know the dependencies you have built in, and they won't necessarily recreate the view when they refactor the table.

    I am sure there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2 but actually be receiving data from col3. By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.

    So, let the developer beware: know what assumptions are in effect around your code, and keep on discouraging people from using SELECT * syntax in anything but the simplest of ad-hoc queries.

    And of course, let's clean up after ourselves. To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named
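    A footnote on the workaround mentioned above: you don't have to drop and recreate the SELECT * view to repair it. Refreshing its metadata rebinds the column list to the table's current definition – a quick sketch using the objects from this test (run it after recreating testtab but before the cleanup):

        -- Rebind the view's metadata to the table's current column layout
        EXEC sp_refreshview N'testtab_vw';
        -- The view now returns col2's data again
        SELECT 'star', col1, col2 FROM testtab_vw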


  • Windows 2012/IIS 8 + ASP.NET MVC Application 403.14 (Forbidden) - The Web server is configured to not list the contents

    - by WiredPrairie
    I have a very simple MVC 4 application I'm trying to deploy to a Windows 2012 server. Inconsistently, when navigating to the root of the web application (http://localhost/app), it returns a 403.14 (Forbidden):

    Detailed Error Information:
    Module: DirectoryListingModule
    Notification: ExecuteRequestHandler
    Handler: StaticFile
    Error Code: 0x00000000
    Requested URL: http://localhost:80/test1/
    Physical Path: c:\apps\test1\
    Logon Method: Negotiate

    The web application:
    - is a very vanilla VS2012 MVC 4 Intranet template, with only a tweak to a label to prove things were working
    - runs in an Integrated v4.0 application pool set up to use Windows authentication
    - has a custom AD identity assigned to the application pool (so it can gain access to a SQL server), and that identity has read permissions on the c:\apps\test1 folder in which it is running
    - targets .NET 4.0 currently; there's no default document in an MVC 4 application (like a default.aspx), as there shouldn't need to be one, and I don't want to enable directory listings (as that's not the real error)
    - works locally on my machine in IIS Express on Windows 8
    - has <modules runAllManagedModulesForAllRequests="true" /> configured in web.config
    - is set to precompiled during publish

    Installed: Roles / Web Server (IIS) / Application Development / (.NET 4.5 Extensibility, Application Initialization, ASP.NET 4.5, ISAPI Extensions, ISAPI Filters, WebSocket Protocol).

    When I change the precompiled option to false, the web application does not fail (in my testing, at least, it seems to work consistently). The reason I say it's inconsistent is that I've seen it work, then I've published, and the error returns. I can't find a pattern to the issue (and right now I haven't been able to get it to work again at all). The 403 is returned from both local and remote web browsers. I've had trouble finding a solution that isn't intended for older versions of Windows (like suggestions to reinstall ASP.NET, which won't work on Windows 2012). I really don't know what else to try.


  • Install a web certificate on an Android device

    - by martani_net
    To gain access to WiFi at university I have to log in with my user/pass credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't often take my laptop with me to university, so I usually want to connect using my HTC Magic, but I have no clue how to install the certificate separately on Android; it is always rejected.

    [Edit 2]: this is what is stated on their website (roughly translated from the French):

    Need for installation of official CyberTrust certificates validated by the CRU (http://www.cru.fr/wiki/scs/). The certificates contain certified information used to generate encryption keys for the exchange of "sensitive" data, such as a user's password. When connecting to CanalIP-UPMC, for example, the user must validate the identity of the server by accepting the certificate that appears on screen in a popup window. In reality, the user is unable to truly validate a certificate this way, because a simple visual check of the certificate is impossible. Therefore, the certificates of the certification authority (CRU-Cybertrust Educationnal-ca.ca and Cybertrust-global-root-ca.ca) must be installed in the browser beforehand, so that the validity of the server certificate can be checked automatically. Before you connect to the CanalIP-UPMC network, you must register the Cybertrust-Educationnal-ca.ca certification authority in your browser. Download Cybertrust-Educationnal-ca.ca, depending on your browser, from the link below:
    With Internet Explorer, click on the following link.
    With Firefox, click on the following link.
    With Safari, click on the following link.
    If this procedure is not followed, a real risk is incurred by the user: that of having their UPMC LDAP directory password stolen. A malicious server could in fact quite easily mount a "man-in-the-middle" attack by posing as the legitimate UPMC server. The theft of a password allows the attacker to steal an identity for transactions over the Internet, which can engage the responsibility of the trapped user...

    This is their website: http://www.canalip.upmc.fr/doc/Default.htm (in French; Google-translate it :))

    Does anyone know how to install a web certificate on Android?


  • Side-By-Side Configuration Error VC90.CRT

    - by Swiss
    I keep receiving the following error when trying to run MiKTeX 2.8 or Visual Studio 2008 on 64-bit Windows Vista. It's particularly odd because both programs were working problem-free until a few days ago.

    The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.

    Opening the Application log provides the following information:

    Activation context generation failed for "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\texworks.exe". Error in manifest or policy file "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\Microsoft.VC90.CRT.MANIFEST" on line 4. Component identity found in manifest does not match the identity of the component requested. Reference is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.4148". Definition is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.1". Please use sxstrace.exe for detailed diagnosis.

    It looks like the problem is with Microsoft.VC90.CRT.MANIFEST, but I am not sure why, or how to fix it. I have tried uninstalling/reinstalling Visual Studio and MiKTeX, as well as uninstalling/reinstalling Microsoft's C++ Redistributable, but nothing seems to fix this problem.


  • Cannot connect to my EC2 instance because of "Permission denied (publickey)"

    - by Burak
    In the AWS console, I saw that my key pair was deleted. I created a new one with the same name. Then I tried to connect with:

        ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com

    Here's the output:

        macs-MacBook-Air:~ mac$ ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to ********.compute-1.amazonaws.com [*****] port 22.
        debug1: Connection established.
        debug1: identity file sohoKey.pem type -1
        debug1: identity file sohoKey.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '*******.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /Users/mac/.ssh/known_hosts:3
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: sohoKey.pem
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: sohoKey.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Update: I detached my old EBS volume and attached it to the new instance. Now, how can I mount it?


  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a university network; I have an account there). Then I get a window with the following fields:

    Wireless Security: [WPA&WPA2 Enterprise]
    Authentication: [Tunneled TLS]
    Anonymous Identity: []
    CA Certificate: [(None)]
    Inner Authentication: [some letters]
    User Name: []
    Password: []

    I enter my user name and password, leave the other fields at their default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem?

    I think it is important to note that our system administrator tried to find some files (which probably need to be used as the "CA Certificate"). He said that he does not know where this file is located on Ubuntu (he supports only Windows). So this is probably the direction I need to go: I need to find this file. But maybe I am wrong; maybe something else needs to be done. Could you please help me with that?


  • Problem with creation of scheduled task from IIS6 on SR2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but I will also try here, since it might be more system-related.

    I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I create tasks on webserver01 through webserver04. It works perfectly for 3-5 days, but then it breaks, giving me the following error message in a message box:

    "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application."

    If I have the system in the broken state and I change the identity of the app pool to Domain Administrator, it works. As I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but then breaks again. It seems like a permission-related problem; I just don't understand why it works sometimes and sometimes doesn't. I hope someone out there has seen this problem!

    Looking forward to hearing from you. Kind regards, Morten, Denmark


  • Sshfs is not working

    - by Devrim
    Hi, when I run

        sshpass -p 'mypass' sshfs 'root'@'68.19.40.16':/ '/dir' -o StrictHostKeyChecking=no,debug

    it successfully mounts, but it runs in the foreground. When I run it without the 'debug' parameter, it doesn't mount at all. The server is Ubuntu 8.04. Any ideas why?

    UPDATE: When I run the command as ROOT, it does mount. It doesn't work with other users. Here is the output of an unsuccessful mount:

        $ sshpass -p 'pass' sshfs 'root'@'68.1.1.1':/ '/s6' -o StrictHostKeyChecking=no,sshfs_debug,loglevel=debug
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to 68.1.1.1 [68.1.1.1] port 22.
        debug1: Connection established.
        debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa type -1
        debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
        debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        Warning: Permanently added '68.1.1.1' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa
        debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa
        debug1: Next authentication method: password
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.UTF-8
        debug1: Sending subsystem: sftp
        Server version: 3
        debug1: channel 0: free: client-session, nchannels 1
        debug1: fd 0 clearing O_NONBLOCK
        debug1: Killed by signal 1.


  • Can't login via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04 LTS instance on AWS EC2 and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer ssh into my VM. It isn't accepting my ssh key, and my password is also rejected. The VM is running, reachable, and SSH is started; the problem seems to be with the authentication part. SSH has been the only way for me to access that VM. What are my options?

        ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected]
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /home/ubuntu/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to www.hostname.com [37.37.37.37] port 22.
        debug1: Connection established.
        debug1: identity file .ssh/sos.pem type -1
        debug1: identity file .ssh/sos.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33
        debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key.
        debug1: Found key in /home/ubuntu/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: .ssh/sos.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentications that can continue: publickey,password
        Permission denied, please try again.


  • SSH login very slow on OS X Leopard

    - by acjohnson55
    My SSH sessions take a very long time to initiate. This applies to logins with and without passwords, interactive and non-interactive. I have tried setting 'GSSAPIAuthentication no' and 'IPQoS 0x00' on the client side, and 'UseDNS no' on the server side, but no dice. I'm really stumped and frustrated. The worst part is that SFTP takes forever to establish connections too, making file transfers much longer than they would be otherwise.

    I thought the problem might be something with PAM, because of where the hang is in the sshd log below, so I tried commenting out each line one by one in the /etc/pam.d/sshd file. Some made login impossible; some had no apparent effect. I can't really tell if PAM is stalling on other services, but I can say that su'ing into my account from another account with 'su -l' has no apparent delay. I tried creating a new user account, just to see if there was something wrong with my existing account, and the same problem persisted. Any ideas what's going on?

    On the client side, the most verbose mode outputs (redacted where reasonable):

        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data ...
        debug1: ... line 1: Applying options for ...
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ... [x.x.x.x] port 22.
        debug1: Connection established.
        debug1: identity file /.../.ssh/id_rsa type -1
        debug1: identity file /.../.ssh/id_rsa-cert type -1
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/.../.ssh/id_dsa" as a RSA1 public key
        debug1: identity file /.../.ssh/id_dsa type 2
        debug1: identity file /.../.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2
        debug1: match: OpenSSH_5.2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug2: fd 3 setting O_NONBLOCK
        debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug2: dh_gen_key: priv key bits set: 136/256
        debug2: bits set: 523/1024
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA ...
        debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug3: load_hostkeys: loading entries for host "x.x.x.x" from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug1: Host '...' is known and matches the RSA host key.
        debug1: Found key in /.../.ssh/known_hosts:9
        debug2: bits set: 492/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /.../.ssh/id_dsa (0x7f8b7b41d6c0)
        debug2: key: /.../.ssh/id_rsa (0x0)
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug3: start over, passed a different list publickey,password,keyboard-interactive
        debug3: preferred publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering DSA public key: /.../.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-dss blen 434
        debug2: input_userauth_pk_ok: fp ...
        debug3: sign_and_send_pubkey: DSA ...
        debug1: Authentication succeeded (publickey).
        Authenticated to ... ([x.x.x.x]:22).
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        ****** Hangs here ******
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug3: Ignored env TERM_PROGRAM
        debug3: Ignored env SHELL
        debug3: Ignored env TERM
        debug3: Ignored env TMPDIR
        debug3: Ignored env Apple_PubSub_Socket_Render
        debug3: Ignored env TERM_PROGRAM_VERSION
        debug3: Ignored env TERM_SESSION_ID
        debug3: Ignored env USER
        debug3: Ignored env COMMAND_MODE
        debug3: Ignored env SSH_AUTH_SOCK
        debug3: Ignored env Apple_Ubiquity_Message
        debug3: Ignored env __CF_USER_TEXT_ENCODING
        debug3: Ignored env PATH
        debug3: Ignored env MKL_NUM_THREADS
        debug3: Ignored env PWD
        debug1: Sending env LANG = en_US.UTF-8
        debug2: channel 0: request env confirm 0
        debug3: Ignored env HOME
        debug3: Ignored env SHLVL
        debug3: Ignored env DYLD_LIBRARY_PATH
        debug3: Ignored env PYTHONPATH
        debug3: Ignored env LOGNAME
        debug3: Ignored env DISPLAY
        debug3: Ignored env SECURITYSESSIONID
        debug3: Ignored env _
        debug2: channel 0: request shell confirm 1
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768
        debug2: channel_input_status_confirm: type 99 id 0
        debug2: PTY allocation request accepted on channel 0
        debug2: channel 0: rcvd adjust 2097152
        debug2: channel_input_status_confirm: type 99 id 0
        debug2: shell request accepted on channel 0

    On the server side, the debug output looks like:

        Sep 16 18:46:40 ... sshd[31435]: debug1: inetd sockets after dupping: 3, 4
        Sep 16 18:46:40 ... sshd[31435]: Connection from x.x.x.x port 52758
        Sep 16 18:46:40 ... sshd[31435]: debug1: Current Session ID is 56AC0FB0 / Session Attributes are 00008000
        Sep 16 18:46:40 ... sshd[31435]: debug1: Running in inetd mode in a non-root session... assuming inetd created the session for us.
        Sep 16 18:46:40 ... sshd[31435]: debug1: Client protocol version 2.0; client software version OpenSSH_5.9
        Sep 16 18:46:40 ... sshd[31435]: debug1: match: OpenSSH_5.9 pat OpenSSH*
        Sep 16 18:46:40 ... sshd[31435]: debug1: Enabling compatibility mode for protocol 2.0
        Sep 16 18:46:40 ... sshd[31435]: debug1: Local version string SSH-2.0-OpenSSH_5.2
        Sep 16 18:46:40 ... sshd[31435]: debug1: Checking with Service ACLs for ssh login restrictions
        Sep 16 18:46:40 ... sshd[31435]: debug1: call to mbr_user_name_to_uuid with <...> suceeded to retrieve user_uuid
        Sep 16 18:46:40 ... sshd[31435]: debug1: Call to mbr_check_service_membership failed with status <0>
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: initializing for "..."
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: setting PAM_RHOST to "x.x.x.x"
        Sep 16 18:46:40 ... sshd[31435]: Failed none for ... from x.x.x.x port 52758 ssh2
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2
        Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK
        Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1
        Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ...
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2
        Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK
        Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1
        Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ...
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: ssh_dss_verify: signature correct
        Sep 16 18:46:40 ... sshd[31435]: debug1: do_pam_account: called
        Sep 16 18:46:40 ... sshd[31435]: Accepted publickey for ... from x.x.x.x port 52758 ssh2
        Sep 16 18:46:40 ... sshd[31435]: debug1: monitor_child_preauth: ... has been authenticated by privileged process
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: establishing credentials
        ***** Hangs here *****
        Sep 16 18:46:54 ... sshd[31435]: User child is on pid 31654
        Sep 16 18:46:54 ... sshd[31654]: debug1: PAM: establishing credentials
        Sep 16 18:46:54 ... sshd[31654]: debug1: permanently_set_uid: 509/20
        Sep 16 18:46:54 ... sshd[31654]: debug1: Entering interactive session for SSH2.
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_init_dispatch_20
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384
        Sep 16 18:46:54 ... sshd[31654]: debug1: input_session_request
        Sep 16 18:46:54 ... sshd[31654]: debug1: channel 0: new [server-session]
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_new: session 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: session 0: link with channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: confirm session
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_global_request: rtype [email protected] want_reply 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request pty-req reply 1
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req pty-req
        Sep 16 18:46:54 ... sshd[31654]: debug1: Allocating pty.
        Sep 16 18:46:54 ... sshd[31435]: debug1: session_new: session 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_pty_req: session 0 alloc /dev/ttys008
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request env reply 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req env
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request shell reply 1
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req shell
        Sep 16 18:46:54 ... sshd[31655]: debug1: Setting controlling tty using TIOCSCTTY.

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (PUT, DELETE, GET, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then having that application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
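    For the first hop of this pipeline, nginx's auth_request module (ngx_http_auth_request_module, built in since nginx 1.5.4 and a third-party add-on before that) does roughly what is described: it fires a subrequest at an internal location, a 2xx response lets the request proceed, and a 401 or 403 is relayed back to the client. A minimal sketch, assuming hypothetical upstreams auth_app and app_backend and a hypothetical X-User header that the auth app appends:

        location / {
            # ask the auth app before proxying; 401/403 from it short-circuit the request
            auth_request /auth;
            # capture a header the auth app appended and forward it downstream
            auth_request_set $auth_user $upstream_http_x_user;
            proxy_set_header X-User $auth_user;
            proxy_pass http://app_backend;
        }

        location = /auth {
            internal;                        # not reachable from outside
            proxy_pass http://auth_app;
            proxy_pass_request_body off;     # the decision only needs headers
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

    Only one auth_request is allowed per location, so the full authentication-then-authorization chain would have to be collapsed into a single auth app, or scripted with something like OpenResty's access_by_lua, which can run several access steps in order.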

    Read the article

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter, and drive ID questions

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so forth), or is it absolutely a read-only, direct correlation to the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (I'm also curious whether S.M.A.R.T.'s bad sector count is zeroed afterward, or whether it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I believe I've read that a drive's identity information is stored on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for SuperUser.)
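    On the S.M.A.R.T. side, smartmontools is the usual way to read the counters in question; a quick sketch, with the device path as a placeholder. Worth knowing: factory-marked defects live in the drive's internal P-list and are hidden before the drive ships, so they normally never appear in these attributes, which track grown (G-list) defects found in the field:

        # print the health status plus the vendor attribute table
        smartctl -a /dev/sdX
        # the attributes most relevant to bad sectors:
        #   Reallocated_Sector_Ct  - sectors already remapped to spares
        #   Current_Pending_Sector - sectors flagged for remapping on the next write
        #   Offline_Uncorrectable  - sectors that failed an offline surface scan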

    Read the article

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files, and they look the same. Since I am using home directories mounted by NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.
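    One thing worth ruling out, offered as a guess rather than a diagnosis: sshd's default StrictModes setting silently ignores an authorized_keys file, .ssh directory, or home directory that is group- or world-writable, and the -rw-rw-r-- mode shown above would trip it if the server-side copies look the same. The usual checks, run on the machine being logged into:

        # as the affected user on the remote host
        chmod go-w ~                      # home directory must not be group/world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # sshd logs the refusal; on RHEL look for "Authentication refused: bad ownership or modes"
        grep -i 'authentication refused' /var/log/secure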

    Read the article

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our students' Exchange accounts ported over from an Exchange Server 2003 machine to the Microsoft cloud service known as Live@EDU. To do this we need to install two pieces of software: (1) OLSync and (2) Microsoft Identity Lifecycle Manager. The "Download the Galsync.msi here" link takes you to a page that requires a login with an admin account for Live@EDU. That part works. However, once logged in, it redirects to a page that states: https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407 Page Not Found The content that you requested cannot be found or you do not have permission to view it. If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! Likewise, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live@EDU servers, any general guidance would be helpful! Thanks in advance.

    Read the article

  • ADFS v.2.0 transitive trust in a federation scenario

    - by masi
    Currently I'm working with ADFS to establish a federated trust between two separate domains. My question is simple: does ADFS v2.0 support transitive trust across federated identity providers? I know that ADFS v1.0 does not, as stated in this document on page 9. But looking at the claims rules that come with ADFS 2.0, it seems to be possible, as a Microsoft partner confirmed. However, the documentation on this topic is a mess! I simply could not find any ADFS v2.0 statements on the subject (if you have any documentation on this, please help me out!). To be more clear, let's assume this scenario: federation provider (A) trusts federation provider (B), which trusts identity provider (C). So, does (A) trust identities coming from (C) across (B)? Also, if it is possible, there are some things I'm especially interested in: Is it possible to restrict the transitive trust in ADFS in any way? If so, how? How does the transitive trust affect the Issuer and OriginalIssuer properties of the claims? If transitive trust is used together with claim transformations, and provider (B) transforms incoming claims from (C) into (new) claims of the same type and value, how would that affect the Issuer and OriginalIssuer properties? (See the sketch below for the transformation case.)
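    For that transformation case, here is a hypothetical pass-through rule in the ADFS 2.0 claim rule language (the claim type is a placeholder; this is a sketch of the rule syntax, not a confirmed statement about transitive-trust behavior):

        c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"]
            => issue(claim = c);

    As far as I understand the issuer bookkeeping, reissuing an incoming claim with issue(claim = c) sets Issuer to the reissuing provider while OriginalIssuer keeps the upstream source, whereas constructing a fresh claim with issue(Type = ..., Value = c.Value) stamps the reissuing provider as both; treat that as an assumption to verify against (B)'s actual output.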

    Read the article

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files left, so I cannot remove it. They are:

        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll

    Whenever I try to delete the files, either through Explorer or a command line, I get a permission-denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good? Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work either, as I was denied permission to do that as well. I'll try booting into safe mode and getting rid of them there. Edit II: Booting into safe mode did not allow me to delete the files. Bummer.
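    For reference, the usual Vista sequence from an elevated command prompt is to take ownership of the whole tree, grant the Administrators group full control, and then remove the folder; a sketch (this destroys everything under C:\Windows.old, so only run it once nothing else is needed from there):

        REM take ownership of the folder and everything below it
        takeown /f C:\Windows.old /r /d y
        REM grant the Administrators group full control, recursively
        icacls C:\Windows.old /grant Administrators:F /t
        REM remove the tree; /s recurses, /q suppresses prompts
        rd /s /q C:\Windows.old

    Disk Cleanup (cleanmgr) also offers a "Previous Windows installation(s)" item that removes Windows.old through supported means, which is worth trying first.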

    Read the article

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8R2 trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the web server's machine account (DOMAIN\MACHINENAME$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
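    For context: when an application pool runs as ApplicationPoolIdentity and reaches out to a remote SQL Server with integrated security, it authenticates as the web server's machine account, and an ANONYMOUS LOGON failure usually means the Kerberos/NTLM negotiation dropped those credentials along the way (missing or duplicate SPNs are a common cause). A sketch of the server-side grant, with hypothetical names (MYDOMAIN\WEB01$ for the IIS box's machine account, MyAppDb for the database):

        -- run on the SQL Server 2008 R2 instance
        CREATE LOGIN [MYDOMAIN\WEB01$] FROM WINDOWS;
        USE MyAppDb;
        CREATE USER [MYDOMAIN\WEB01$] FOR LOGIN [MYDOMAIN\WEB01$];
        -- grant whatever role the test script actually needs
        EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\WEB01$';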

    Read the article

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames: Old: Domain\11111 New: Domain\22222 When this user now logs in with their new username and browses to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates them, but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 / IIS 7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]: Negotiate
        Request.ServerVariables["AUTH_USER"]: Domain\11111
        Request.ServerVariables["LOGON_USER"]: Domain\22222
        Request.ServerVariables["REMOTE_USER"]: Domain\11111
        HttpContext.Current.User.Identity.Name: Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name: Domain\11111

    From the above, only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up the database permissions. In a separate testing environment (a completely different server: Windows 2003, IIS 6), all of the above variables show Domain\22222. So this seems to be a server-specific issue, as if the credentials are somehow being cached either on the user's machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? And how should I go about fixing it? I looked at the following two resources and will be giving the first one a try shortly: http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7 http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080 Thanks.
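    Since the first linked resource covers clearing saved Windows credentials, the command-line equivalent is cmdkey (built into Vista/7 and Server 2008), which can quickly rule out a stale cached credential on the user's workstation; a sketch, where WEBSERVER stands in for whatever target name cmdkey /list actually reports:

        REM list the credentials cached on the user's workstation
        cmdkey /list
        REM delete the stale entry for the web server, then log off and retest
        cmdkey /delete:Domain:target=WEBSERVER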

    Read the article
