Search Results

Search found 3315 results on 133 pages for 'magic packet'.


  • What does path finding in internet routing do and how is it different from A*?

    - by alan2here
    Note: If you don't understand this question then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User.

    In many respects, path finding algorithms like A* are very similar to internet routing. For example: a node in an A* path finding system can search for a path through edges between other nodes, and a router that's part of the internet can search for a route through cables between other routers. In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, and each node can also temporarily store a state involving several numbers.

    Routers on the internet seem to have remarkable properties, as I understand it: they are very performant; new nodes can be added at any time, using a free address from a finite (not tree-like) address space; it's real routing, like A*, with never any doubling back, for example; similar IP addresses don't have to be geographically nearby; the network reacts quickly to changes in the network's shape, for example if a line is down; and routers share information, so it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses each of its directions leads to most directly.

    I'm looking for a basic, general, high-level description of the algorithm's workings from the point of view of an individual router. Does anyone have one? I presume public internet routers don't use A*, as the overheads would be too large and it would scale too poorly. I also presume there is a single method worldwide, because it seems as if it must involve transferring a lot of data to update and communicate a reasonable amount of state between neighboring routers. For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide, the detail and reliability of the routing is reduced over increasing distances, there is increasing backtracking involved in parts of the network that are less geographically uniform, or maybe each router really does perform an A*-style search, temporarily maintaining open and closed lists when a packet arrives.
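
    In broad strokes, real routers answer this with precomputed tables rather than a per-packet search: interior protocols such as RIP exchange distance-vector (Bellman-Ford) updates with neighbours, OSPF routers flood link-state information and run Dijkstra locally, and BGP exchanges path-vector routes between networks, while forwarding itself is just a longest-prefix-match table lookup. The sketch below is only a minimal illustration of the distance-vector idea; the class and member names are invented for this example, and real protocols add split horizon, hold-down timers, prefix aggregation and policy, none of which is shown here.

    ```csharp
    using System;
    using System.Collections.Generic;

    class RoutingTable
    {
        // destination prefix -> (total cost, neighbour to forward to)
        private readonly Dictionary<string, (int Cost, string NextHop)> _routes =
            new Dictionary<string, (int Cost, string NextHop)>();

        // A neighbour periodically advertises its own prefix -> cost view; linkCost is our cost to reach it.
        // Returns true if anything changed, meaning we should re-advertise our table to our own neighbours.
        public bool OnAdvertisement(string neighbour, int linkCost, IDictionary<string, int> advertised)
        {
            bool changed = false;
            foreach (var entry in advertised)
            {
                int candidate = linkCost + entry.Value;
                if (!_routes.TryGetValue(entry.Key, out var current) || candidate < current.Cost)
                {
                    _routes[entry.Key] = (candidate, neighbour);  // keep only the cheapest next hop
                    changed = true;
                }
            }
            return changed;
        }

        // Forwarding a packet is a plain table lookup: no search, no open/closed lists as in A*.
        public string NextHopFor(string prefix) =>
            _routes.TryGetValue(prefix, out var route) ? route.NextHop : "default";
    }
    ```

    The contrast with A* is that each router keeps only a single best next hop per destination and trusts that its neighbours have done the same, so no router ever needs a global open or closed list.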

    Read the article

  • Can't access some websites using Ubuntu 13.10

    - by Adame Doe
    Something's wrong with Ubuntu. Since I upgraded to 13.10, I can't access some websites for no apparent reason. I've tried everything imaginable to solve this problem:

    - Made sure that the MTUs are the same
    - Disabled IPv6 in both the network manager and the browsers I use
    - Deactivated my network keys
    - DMZed my computer
    - Used other DNS servers like Google and OpenDNS
    - Checked that no firewall was running on my computer

    ... and it's the same result. I even tried to reinstall Ubuntu a couple of times, but no luck. The most annoying thing about it is I can't access wordpress.org! So, there's no way it could be an ISP restriction of some kind. When I use a VPN, I can access pretty much anything. I'm really frustrated because I have to use wordpress.org very often. Any clue?

    ifconfig:

    adame@adame-ws:~$ ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:26:18:3d:b0:7c
              inet addr:10.42.0.1  Bcast:10.42.0.255  Mask:255.255.255.0
              inet6 addr: fe80::226:18ff:fe3d:b07c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:8024 errors:0 dropped:0 overruns:0 frame:0
              TX packets:7966 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:684480 (684.4 KB)  TX bytes:616608 (616.6 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:8222 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8222 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:568269 (568.2 KB)  TX bytes:568269 (568.2 KB)

    wlan0     Link encap:Ethernet  HWaddr 00:19:70:40:85:eb
              inet addr:192.168.2.3  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::219:70ff:fe40:85eb/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1464  Metric:1
              RX packets:123705 errors:0 dropped:0 overruns:0 frame:0
              TX packets:98141 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:94963545 (94.9 MB)  TX bytes:10387470 (10.3 MB)

    /etc/hosts:

    127.0.0.1   localhost
    127.0.1.1   adame-ws
    ::1         ip6-localhost ip6-loopback
    fe00::0     ip6-localnet
    ff00::0     ip6-mcastprefix
    ff02::1     ip6-allnodes
    ff02::2     ip6-allrouters

    tracepath wordpress.org:

    1:  adame-ws.local   0.092ms pmtu 1500
    1:  192.168.2.1      1.300ms asymm  2
    1:  192.168.2.1      1.060ms asymm  2
    2:  no reply
    3:  no reply
    4:  no reply
    5:  no reply
    6:  no reply
    7:  no reply
    8:  no reply
    ... keeps on going like that

    ping wordpress.org:

    adame@adame-ws:~$ ping wordpress.org
    PING wordpress.org (66.155.40.250) 56(84) bytes of data.
    --- wordpress.org ping statistics ---
    10 packets transmitted, 0 received, 100% packet loss, time 9071ms
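
    One classic cause that fits "a VPN works but some plain connections hang" is a path-MTU blackhole, and the output above does show wlan0 at MTU 1464 while tracepath starts from pmtu 1500. Purely as an illustrative check, not a confirmed diagnosis of this poster's problem, the sketch below probes which packet sizes survive with the don't-fragment flag set; the target host and probe sizes are arbitrary choices for the example.

    ```csharp
    using System;
    using System.Net.NetworkInformation;
    using System.Text;

    class PmtuProbe
    {
        static void Main()
        {
            const string host = "wordpress.org";        // one of the affected sites from the question
            using var ping = new Ping();
            var options = new PingOptions(64, true);     // TTL 64, don't-fragment set

            // Walk ICMP payload sizes downward; the largest size that still gets a reply
            // approximates the usable path MTU minus 28 bytes of IPv4 + ICMP headers.
            for (int payload = 1472; payload >= 1200; payload -= 8)
            {
                byte[] buffer = Encoding.ASCII.GetBytes(new string('x', payload));
                PingReply reply = ping.Send(host, 2000, buffer, options);
                Console.WriteLine($"{payload + 28,5} bytes on the wire -> {reply.Status}");
                if (reply.Status == IPStatus.Success)
                    break;  // first success from the top is the largest packet that gets through
            }
        }
    }
    ```

    If large don't-fragment packets fail while small ones succeed, lowering the interface MTU or fixing the router's ICMP handling is the usual next step; if nothing gets through at all (as in the ping output above), the problem is elsewhere.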

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • SQL Server &ndash; Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday, started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag.

    Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don't think there's a time when you would really have to use it for real. Because it's a hack, there are a number of things that can go wrong, so play with it knowing that. I've managed to totally corrupt one database. :) Oh... and for those saying "yeah, yeah, you have a single table in a filegroup and you're restoring that", I say "nay nay" to you.

    As we all know, SQL Server can't do single table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns: since we have to maintain RI, we have to restore all tables represented in an RI graph. For this exercise I say BAH! to those concerns. Note that this method "works" only for simple tables that don't have LOB or off-row data. The code can be expanded to include those, but I've tried to keep things "simple". Note also that for this to work our table needs to be relatively static data-wise; this doesn't work for an OLTP table. Products are a perfect example of static data. They don't change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode, and for tables whose contents have been deleted or truncated but NOT when the table was dropped. Everything we'll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing is probably to put the database into single user mode ASAP while you're performing this procedure and return it to multi user after you're done.

    How do we do it? We will be using an undocumented but well-known DBCC command (DBCC PAGE), an undocumented function (sys.fn_dblog) and the little-known RESTORE DATABASE ... PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.

    -- create a duplicate table. This doesn't preserve indexes!
    SELECT *
    INTO AdventureWorks.Production.Product1
    FROM AdventureWorks.Production.Product

    After we run this code, take a full backup to perform further testing.

    First let's see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE, every row deletion is logged in the transaction log. With TRUNCATE, only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice it to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren't.
    If we dive deep into the sys.fn_dblog output we can see that once you truncate a table, all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta's handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can easily be converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:

    -- [Page ID] is displayed in hex format.
    -- To convert it to a readable int we'll use the dbo.HexStrToVarBin function found at
    -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
    -- This function must be installed in the master database
    SELECT Context, AllocUnitName, [Page ID], Description
    FROM sys.fn_dblog(NULL, NULL)
    WHERE [Current LSN] = '00000031:00000a46:007d'

    The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command:

    -- we need this trace flag to redirect output to the query window.
    DBCC TRACEON (3604);
    -- WITH TABLERESULTS gives us data in table format instead of message format
    -- we use format option 3 because it's the easiest to read and manipulate further on
    DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS

    Since the DBCC PAGE output can be quite extensive I won't put it here. You can see an example of it in the link at the beginning of this section.

    Getting deleted data back

    When we run a delete statement, every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It's not: only the pointers to the rows are removed, while the data itself is still on the data page. We just can't access it by normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn't magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can now insert data from the temporary table into our original table by hand.

    Getting truncated data back

    When we run a truncate, the truncated data pages aren't touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. It turns out that the problem we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure.

    Stop stalling... show me The Code!
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.
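
    As a side note, the page-discovery step can also be driven from client code. The snippet below is only a hedged illustration: the connection string is a placeholder, it assumes the same AdventureWorks Production.Product1 test table used above, Microsoft.Data.SqlClient is an assumed dependency, and it simply runs the same kind of sys.fn_dblog query the script builds dynamically to list pages containing ghost-record deletes.

    ```csharp
    using System;
    using Microsoft.Data.SqlClient;

    class DeletedPageFinder
    {
        static void Main()
        {
            // Placeholder connection string; requires permissions to run sys.fn_dblog.
            const string connectionString =
                "Server=.;Database=AdventureWorks;Integrated Security=true;TrustServerCertificate=true";

            // Mirrors the filter used in the script above for deleted (ghosted) rows.
            const string sql = @"
                SELECT DISTINCT [Page ID]
                FROM sys.fn_dblog(NULL, NULL)
                WHERE AllocUnitName LIKE '%Production.Product1%'
                  AND Context IN ('LCX_MARK_AS_GHOST', 'LCX_HEAP')
                  AND Operation = 'LOP_DELETE_ROWS';";

            using var connection = new SqlConnection(connectionString);
            connection.Open();
            using var command = new SqlCommand(sql, connection);
            using var reader = command.ExecuteReader();
            while (reader.Read())
                Console.WriteLine(reader.GetString(0));  // [Page ID] is a file:page hex string, e.g. 0001:0000009e
        }
    }
    ```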

    Read the article

  • CodePlex Daily Summary for Tuesday, January 04, 2011

    CodePlex Daily Summary for Tuesday, January 04, 2011Popular ReleasesMini Memory Dump Diagnosis using Asp.Net: MiniDump HealthMonitor: Enable developers to generate mini memory dumps in case of unhandled exceptions or for specific exception scenarios without using any external tools , only using Asp.Net 2.0 and above. Many times in production , QA Servers the issues require post-mortem or low-level system debugging efforts. This Memory dump generator will help in those scenarios.EnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14977.000: Prerequisites: ============== o Visual Studio 2008 / Visual Studio 2010 o ReSharper 5.1.1753.4 o StyleCop 4.4.1.2 Preview This release adds no new features, has bug fixes around performance and unhandled errors reported on YouTrack.BloodSim: BloodSim - 1.3.1.0: - Restructured simulation log back end to something less stupid to drastically reduce simulation time and memory usage - Removed a debug log entry that was left over from testing of 1.3.0.0 - Fixed a rounding and calculation error with Haste rating - Added option for Rune of SwordshatteringDbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide Kind Of Magic MSBuild Task: Beta 4: Update to keep up with latest bug fixes. To those who don't like Magic/NoMagic attributes, you may change these names in KindOfMagic.targets file: Change this line: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)"/> to something like this: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)" MagicAttribute="MyMagicAttribute" NoMagicAttribute="MyNoMagicAttribute"/>N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 
Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedAutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. Updated Codeplex image * An error report will be shown to the user which can help the developers to find out what caused the error, this should improve support ** We are working on ...TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvementsBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivty and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. 
Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...DocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!New Projects3D SharePoint Tag Cloud in Silverlight: This is a Silverlight Tag Cloud intended for use in SharePoint but can be configured for use separately as well. Based on a blog post from Peter Gerritsen (http://blog.petergerritsen.nl/2009/02/14/creating-a-3d-tagcloud-in-silverlight-part-1/).AffinityUI for Unity 3D: AffinityUI makes writing GUI code for Unity 3D easier with a component based API that allows you to write GUIs in a declarative fashion using a fluent interface, update events and powerful two-way databinding. A visual editor and scene GUI support are planned for the future.calendar synchronization: This will enable tight integration between ivvy core and infragistics scheduler. Will utilize oData and native isolated strorage instead of SQLite as planed earlier.Card Games Library: The aim of this project is to offer a framework for creating card games logics and rules. This project is not focused on particular games but offers a felxible and extensible base for creating card games, like; poker, solitaire, rummy, etcDbDocument: DbDoc tool seeks to be helper tool side by side with MS SQL server management studio tool, you can design your DB Tables in visualized way through Diagrams and then use “DbDoc” tool to generate design document in MS Word table formatDelux: Delux project for educationDnEcuDiag - A .NET Ecu Diagnostic Application: DnEcuDiag is an application framework for developing electronic control unit diagnostic functionality without the need to implement code.Dynamics CRM 4.0 Recycle Bin: This add-on for Microsoft Dynamics CRM 4.0 to enable Recycle Bin Features (Restore, Permenant Delete) for CRM users.EPPLib.it: Il sistema sincrono di registrazione e mantenimento dei nomi a dominio del Registro.it utilizza il protocollo EPP (Extensible Provisioning Protocol). 
EPPLib consente di interfacciare i propri progetti con il sistema sincrono del Registro.it.IoC4Fun: Very LightWeight IoC or DI Container only required Net Framework 3.5 written in C#. For Small Apps or Educational purpose. Not intrusion code added like others Containers. Just Plan Register Dependencies and Let Container resolve magically the Injection Dependencies.ISocial: WP7 Application, centralise les réseaux sociauxJohnnyIssa: begASP.NET Version 4.LiveCPE: LiveCPE is an Academic project based on C# and ASP.Net. It aims to be a social network website. MS CRM - Skype Connector: Skype Addon for Microsoft Dynamics CRM allows to dial CRM Contacts, Lead and so on via Skype. It's developed in ASP.NET and C#.Ryan's Projects: a group of random c# projectsSeleniuMspec: SeleniuMspec is a template for Selenium IDE that generates Selenium tests for Mspec (a .NET BDD framework). SharePoint FAQ Web Part: The SharePoint FAQ Web Part makes it easier for SharePoint Users to display FAQs feature on their SharePoint sites. You'll no longer have to maintain HTML and anchors in the Content Editor Web Part, as the FAQs are now automatically generated from a SharePoint list. Silverlight Planet: Projetos da comunidade Silverlight Planet www.silverlightplanet.net.brWP7 Expense Report: Expense Report for Windows Phone 7xll - the easy way to create Excel add-ins: The xll C++ library makes it easy to create xll add-ins for Excel. It also supports the big grid, wide-character strings, and thread-safe functions. ????????: ????,??,??,????,????,??????

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>.  Even though most people think of sets as mathematical constructs, they are actually very useful classes that can be used to help make your application more performant if used appropriately. A Background From Math In mathematical terms, a set is an unordered collection of unique items.  In other words, the set {2,3,5} is identical to the set {3,5,2}.  In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2).  In addition, you can perform set arithmetic on sets such as: Intersections: The intersection of two sets is the collection of elements common to both.  Example: The intersection of {1,2,5} and {2,4,9} is the set {2}. Unions: The union of two sets is the collection of unique items present in either or both set.  Example: The union of {1,2,5} and {2,4,9} is {1,2,4,5,9}. Differences: The difference of two sets is the removal of all items from the first set that are common between the sets.  Example: The difference of {1,2,5} and {2,4,9} is {1,5}. Supersets: One set is a superset of a second set if it contains all elements that are in the second set. Example: The set {1,2,5} is a superset of {1,5}. Subsets: One set is a subset of a second set if all the elements of that set are contained in the first set. Example: The set {1,5} is a subset of {1,2,5}. If We’re Not Doing Math, Why Do We Care? Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation?  The answer is simple: they are extremely efficient ways to determine ownership in a collection. For example, let’s say you are designing an order system that tracks the price of a particular equity, and once it reaches a certain point will trigger an order.  Now, since there’s tens of thousands of equities on the markets, you don’t want to track market data for every ticker as that would be a waste of time and processing power for symbols you don’t have orders for.  Thus, we just want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to. Every time a new order comes in, we will check the list of subscriptions to see if the new order’s stock symbol is in that list.  If it is, great, we already have that market data feed!  If not, then and only then should we subscribe to the feed for that symbol. So far so good, we have a collection of symbols and we want to see if a symbol is present in that collection and if not, add it.  This really is the essence of set processing, but for the sake of comparison, let’s say you do a list instead: 1: // class that handles are order processing service 2: public sealed class OrderProcessor 3: { 4: // contains list of all symbols we are currently subscribed to 5: private readonly List<string> _subscriptions = new List<string>(); 6:  7: ... 8: } Now whenever you are adding a new order, it would look something like: 1: public PlaceOrderResponse PlaceOrder(Order newOrder) 2: { 3: // do some validation, of course... 4:  5: // check to see if already subscribed, if not add a subscription 6: if (!_subscriptions.Contains(newOrder.Symbol)) 7: { 8: // add the symbol to the list 9: _subscriptions.Add(newOrder.Symbol); 10: 11: // do whatever magic is needed to start a subscription for the symbol 12: } 13:  14: // place the order logic! 15: } What’s wrong with this?  
In short: performance!  Finding an item inside a List<T> is a linear – O(n) – operation, which is not a very performant way to find whether an item exists in a collection.  (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would not apply in the real world; but I did and still do believe it is something worth understanding well to make the best choices in computer science.)

Let's think about this: a linear operation means that as the number of items increases, the time it takes to perform the operation tends to increase in a linear fashion.  Put crudely, this means if you double the collection size, you might expect the operation to take something on the order of twice as long.  Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially "visit" every item in the collection.  Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it's not found.

Now, we could of course sort our list and then perform a binary search on it, but sorting is typically linear-logarithmic complexity – O(n * log n) – and could involve temporary storage, so performing a sort after each add would probably add more time.  As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity to move the items and also requires a key and a value – and in our case the key is the value.

This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering – so they save space – and they typically don't care about ordering – so they tend to be extremely performant.  The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface.  As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, the SortedSet<T>, was added that gives you a set with ordering.

HashSet<T> – For Unordered Storage of Sets

When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T> – that is, a Dictionary where the TKey and TValue refer to the same object.  This is really an oversimplification, but logically it makes sense.  I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage to hold both the key and the value.

As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning fast lookups!  Compare the times below between HashSet<T> and List<T>:

Operation     HashSet<T>   List<T>
Add()         O(1)         O(1) at end, O(n) in middle
Remove()      O(1)         O(n)
Contains()    O(1)         O(n)

Now, these times are amortized and represent the typical case.  In the very worst case the operations could be linear if they involve a resizing of the collection – but this is true for both the List and the HashSet, so that's less of an issue when comparing the two.  The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains!  This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine that it's not in the collection.  Compare this to the List, where almost any add or remove must rearrange potentially all the elements, and where finding an item in an unsorted list means searching every item in the List.

So as you can see, if you want to create an unordered collection with very fast lookup and manipulation, the HashSet is a great collection.  And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as the List<T> and can use the System.Linq extension methods as well.

All we have to do to switch from a List<T> to a HashSet<T> is change our declaration.  Since List and HashSet support many of the same members, chances are we won't need to change much else.

public sealed class OrderProcessor
{
    private readonly HashSet<string> _subscriptions = new HashSet<string>();

    // ...

    public PlaceOrderResponse PlaceOrder(Order newOrder)
    {
        // do some validation, of course...

        // check to see if already subscribed, if not add a subscription
        if (!_subscriptions.Contains(newOrder.Symbol))
        {
            // add the symbol to the list
            _subscriptions.Add(newOrder.Symbol);

            // do whatever magic is needed to start a subscription for the symbol
        }

        // place the order logic!
    }

    // ...
}

Notice that we didn't change any code other than the declaration for _subscriptions to be a HashSet<T>.  Thus, we can pick up the performance improvements in this case with minimal code changes.

SortedSet<T> – Ordered Storage of Sets

Just like HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to the SortedDictionary<T,T>.  The SortedSet can be used when you want to do set operations on a collection, but you want to maintain that collection in sorted order.  Now, this is not necessarily mathematically relevant, but if your collection's needs do include ordering, this is the set to use.

The SortedSet appears to be implemented as a binary tree (possibly a red-black tree) internally.  Since binary trees are dynamic structures and non-contiguous (unlike List and SortedList), inserts and deletes do not involve rearranging elements, only changing the linking of the nodes.  There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>.  Let's compare the three:

Operation     HashSet<T>   SortedSet<T>   List<T>
Add()         O(1)         O(log n)       O(1) at end, O(n) in middle
Remove()      O(1)         O(log n)       O(n)
Contains()    O(1)         O(log n)       O(n)

The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this is inconsistent with its implementation and appears to be a documentation error.  There's actually a separate MSDN document (here) on SortedSet that indicates that it is, in fact, logarithmic in complexity.  Let's put it in layman's terms: logarithmic means you can double the collection size and typically you only add a single extra "visit" to an item in the collection.  Contrast that with List<T>'s linear operations, where if you double the size of the collection you double the "visits" to items in the collection.  This is very good performance!  It's still not as performant as HashSet<T>, where it always just visits one item (amortized), but in exchange for sorting this is a good trade.

Consider the following table; this is just illustrative data of the relative complexities, but it's enough to get the point:

Collection Size   O(1) Visits   O(log n) Visits   O(n) Visits
1                 1             1                 1
10                1             4                 10
100               1             7                 100
1000              1             10                1000

Notice that the logarithmic – O(log n) – visit count goes up very slowly compared to the linear – O(n) – visit count.  This is because, since the collection is sorted, it can do one check in the middle, determine which half of the collection the data is in, and discard the other half (a binary search).  So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit, but it's still faster than a List<T>.

Unique Set Operations

Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which play back towards the mathematical set operations described before (a short illustrative sketch follows at the end of this article):

IntersectWith() – Performs the set intersection of two sets.  Modifies the current set so that it only contains elements also in the second set.
UnionWith() – Performs a set union of two sets.  Modifies the current set so it contains all elements present in either the current set or the second set.
ExceptWith() – Performs a set difference of two sets.  Modifies the current set so that it removes all elements present in the second set.
IsSupersetOf() – Checks if the current set is a superset of the second set.
IsSubsetOf() – Checks if the current set is a subset of the second set.

For more information on the set operations themselves, see the MSDN description of ISet<T> (here).

What Sets Don't Do

Don't get me wrong, sets are not silver bullets.  You don't really want to use a set when you want separate key-to-value lookups; that's what the IDictionary implementations are best for.

Also, sets don't store temporal add-order.  That is, if you are adding items to the end of a list all the time, your list is ordered in terms of when items were added to it.  This is something the sets don't do naturally (though you could use a SortedSet with an IComparer over a DateTime, but that's overkill), whereas List<T> can.

Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection.  Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

Summary

Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value.  In addition, if you have need for the mathematical set operations, the C# sets support those as well.  The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order.  In contrast, the SortedSet<T> will give you a sorted collection at a slight reduction in performance.

Technorati Tags: C#,.Net,Little Wonders,BlackRabbitCoder,ISet,HashSet,SortedSet
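As promised above, here is a minimal sketch of the ISet<T> operations in action; the ticker symbols and variable names are made up purely for illustration:

using System;
using System.Collections.Generic;

public static class SetOperationsSketch
{
    public static void Main()
    {
        // Two hypothetical subscription lists, one unordered and one sorted.
        var deskA = new HashSet<string> { "MSFT", "GOOG", "AAPL" };
        var deskB = new SortedSet<string> { "AAPL", "IBM", "MSFT" };

        // UnionWith: deskA now holds every symbol from either set.
        deskA.UnionWith(deskB);                        // MSFT, GOOG, AAPL, IBM

        // IntersectWith: keep only the symbols present in both sets.
        var common = new HashSet<string>(deskA);
        common.IntersectWith(deskB);                   // MSFT, AAPL, IBM

        // ExceptWith: remove everything deskB also holds.
        var onlyA = new HashSet<string>(deskA);
        onlyA.ExceptWith(deskB);                       // GOOG

        // Subset/superset checks do not modify either set.
        Console.WriteLine(deskB.IsSubsetOf(deskA));    // True
        Console.WriteLine(deskA.IsSupersetOf(deskB));  // True
    }
}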

    Read the article

  • Acer aspire 5735z wireless not working after upgrade to 11.10

    - by Jon
    I cant get my wifi card to work at all after upgrading to 11.10 Oneiric. I'm not sure where to start to fix this. Ive tried using the additional drivers tool but this shows that no additional drivers are needed. Before my upgrade I had a drivers working for the Rt2860 chipset. Any help on this would be much appreciated.... thanks Jon jon@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1d:72:ec:76:d5 inet addr:192.168.1.134 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21d:72ff:feec:76d5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7846 errors:0 dropped:0 overruns:0 frame:0 TX packets:7213 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8046624 (8.0 MB) TX bytes:1329442 (1.3 MB) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:91 errors:0 dropped:0 overruns:0 frame:0 TX packets:91 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:34497 (34.4 KB) TX bytes:34497 (34.4 KB) Ive included by dmesg output below [ 0.428818] NET: Registered protocol family 2 [ 0.429003] IP route cache hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.430562] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.436614] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.437409] TCP: Hash tables configured (established 524288 bind 65536) [ 0.437412] TCP reno registered [ 0.437431] UDP hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437482] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437678] NET: Registered protocol family 1 [ 0.437705] pci 0000:00:02.0: Boot video device [ 0.437892] PCI: CLS 64 bytes, default 64 [ 0.437916] Simple Boot Flag at 0x57 set to 0x1 [ 0.438294] audit: initializing netlink socket (disabled) [ 0.438309] type=2000 audit(1319243447.432:1): initialized [ 0.440763] Freeing initrd memory: 13416k freed [ 0.468362] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.488192] VFS: Disk quotas dquot_6.5.2 [ 0.488254] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.488888] fuse init (API version 7.16) [ 0.488985] msgmni has been set to 5890 [ 0.489381] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.489413] io scheduler noop registered [ 0.489415] io scheduler deadline registered [ 0.489460] io scheduler cfq registered (default) [ 0.489583] pcieport 0000:00:1c.0: setting latency timer to 64 [ 0.489633] pcieport 0000:00:1c.0: irq 40 for MSI/MSI-X [ 0.489699] pcieport 0000:00:1c.1: setting latency timer to 64 [ 0.489741] pcieport 0000:00:1c.1: irq 41 for MSI/MSI-X [ 0.489800] pcieport 0000:00:1c.2: setting latency timer to 64 [ 0.489841] pcieport 0000:00:1c.2: irq 42 for MSI/MSI-X [ 0.489904] pcieport 0000:00:1c.3: setting latency timer to 64 [ 0.489944] pcieport 0000:00:1c.3: irq 43 for MSI/MSI-X [ 0.490006] pcieport 0000:00:1c.4: setting latency timer to 64 [ 0.490047] pcieport 0000:00:1c.4: irq 44 for MSI/MSI-X [ 0.490126] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.490149] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.490196] intel_idle: MWAIT substates: 0x1110 [ 0.490198] intel_idle: does not run on family 6 model 15 [ 0.491240] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.493473] ACPI: AC Adapter [ADP1] (on-line) [ 0.493590] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 
[ 0.496771] ACPI: Lid Switch [LID0] [ 0.496818] input: Sleep Button as /devices/LNXSYSTM:00/device:00/PNP0C0E:00/input/input1 [ 0.496823] ACPI: Sleep Button [SLPB] [ 0.496865] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 [ 0.496869] ACPI: Power Button [PWRF] [ 0.496900] ACPI: acpi_idle registered with cpuidle [ 0.498719] Monitor-Mwait will be used to enter C-1 state [ 0.498753] Monitor-Mwait will be used to enter C-2 state [ 0.498761] Marking TSC unstable due to TSC halts in idle [ 0.517627] thermal LNXTHERM:00: registered as thermal_zone0 [ 0.517630] ACPI: Thermal Zone [TZS0] (67 C) [ 0.524796] thermal LNXTHERM:01: registered as thermal_zone1 [ 0.524799] ACPI: Thermal Zone [TZS1] (67 C) [ 0.524823] ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.524852] ERST: Table is not found! [ 0.524948] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.680991] ACPI: Battery Slot [BAT0] (battery present) [ 0.688567] Linux agpgart interface v0.103 [ 0.688672] agpgart-intel 0000:00:00.0: Intel GM45 Chipset [ 0.688865] agpgart-intel 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable [ 0.689786] agpgart-intel 0000:00:00.0: detected 65536K stolen memory [ 0.689912] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000 [ 0.691006] brd: module loaded [ 0.691510] loop: module loaded [ 0.691967] Fixed MDIO Bus: probed [ 0.691990] PPP generic driver version 2.4.2 [ 0.692065] tun: Universal TUN/TAP device driver, 1.6 [ 0.692067] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 0.692146] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.692181] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.692206] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [ 0.692210] ehci_hcd 0000:00:1a.7: EHCI Host Controller [ 0.692255] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 0.692289] ehci_hcd 0000:00:1a.7: debug port 1 [ 0.696181] ehci_hcd 0000:00:1a.7: cache line size of 64 is not supported [ 0.696202] ehci_hcd 0000:00:1a.7: irq 20, io mem 0xf8904800 [ 0.712014] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 0.712131] hub 1-0:1.0: USB hub found [ 0.712136] hub 1-0:1.0: 6 ports detected [ 0.712230] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.712243] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 0.712247] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 0.712287] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 0.712315] ehci_hcd 0000:00:1d.7: debug port 1 [ 0.716201] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 0.716216] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xf8904c00 [ 0.732014] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 0.732130] hub 2-0:1.0: USB hub found [ 0.732135] hub 2-0:1.0: 6 ports detected [ 0.732209] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.732223] uhci_hcd: USB Universal Host Controller Interface driver [ 0.732254] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.732262] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 0.732265] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 0.732298] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 0.732325] uhci_hcd 0000:00:1a.0: irq 20, io base 0x00001820 [ 0.732441] hub 3-0:1.0: USB hub found [ 0.732445] hub 3-0:1.0: 2 ports detected [ 0.732508] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 20 (level, low) -> IRQ 20 [ 0.732514] uhci_hcd 0000:00:1a.1: 
setting latency timer to 64 [ 0.732518] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 0.732553] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 0.732577] uhci_hcd 0000:00:1a.1: irq 20, io base 0x00001840 [ 0.732696] hub 4-0:1.0: USB hub found [ 0.732700] hub 4-0:1.0: 2 ports detected [ 0.732762] uhci_hcd 0000:00:1a.2: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.732768] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 0.732772] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 0.732805] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 0.732829] uhci_hcd 0000:00:1a.2: irq 20, io base 0x00001860 [ 0.732942] hub 5-0:1.0: USB hub found [ 0.732946] hub 5-0:1.0: 2 ports detected [ 0.733007] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.733014] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 0.733017] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 0.733057] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 0.733082] uhci_hcd 0000:00:1d.0: irq 23, io base 0x00001880 [ 0.733202] hub 6-0:1.0: USB hub found [ 0.733206] hub 6-0:1.0: 2 ports detected [ 0.733265] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.733273] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 0.733276] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 0.733313] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 0.733351] uhci_hcd 0000:00:1d.1: irq 17, io base 0x000018a0 [ 0.733466] hub 7-0:1.0: USB hub found [ 0.733470] hub 7-0:1.0: 2 ports detected [ 0.733532] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.733539] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 0.733542] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 0.733578] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 0.733610] uhci_hcd 0000:00:1d.2: irq 18, io base 0x000018c0 [ 0.733730] hub 8-0:1.0: USB hub found [ 0.733736] hub 8-0:1.0: 2 ports detected [ 0.733843] i8042: PNP: PS/2 Controller [PNP0303:KBD0,PNP0f13:PS2M] at 0x60,0x64 irq 1,12 [ 0.751594] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.751605] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.751732] mousedev: PS/2 mouse device common for all mice [ 0.752670] rtc_cmos 00:08: RTC can wake from S4 [ 0.752770] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 0.752796] rtc0: alarms up to one month, y3k, 242 bytes nvram, hpet irqs [ 0.752907] device-mapper: uevent: version 1.0.3 [ 0.752976] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02) initialised: [email protected] [ 0.753028] cpuidle: using governor ladder [ 0.753093] cpuidle: using governor menu [ 0.753096] EFI Variables Facility v0.08 2004-May-17 [ 0.753361] TCP cubic registered [ 0.753482] NET: Registered protocol family 10 [ 0.753966] NET: Registered protocol family 17 [ 0.753992] Registering the dns_resolver key type [ 0.754113] PM: Hibernation image not present or could not be loaded. [ 0.754131] registered taskstats version 1 [ 0.771553] Magic number: 15:152:507 [ 0.771667] rtc_cmos 00:08: setting system clock to 2011-10-22 00:30:48 UTC (1319243448) [ 0.772238] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.772240] EDD information not available. 
[ 0.774165] Freeing unused kernel memory: 984k freed [ 0.774504] Write protecting the kernel read-only data: 10240k [ 0.774755] Freeing unused kernel memory: 20k freed [ 0.775093] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input3 [ 0.779727] Freeing unused kernel memory: 1400k freed [ 0.801946] udevd[84]: starting version 173 [ 0.880950] sky2: driver version 1.28 [ 0.881046] sky2 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.881096] sky2 0000:02:00.0: setting latency timer to 64 [ 0.881197] sky2 0000:02:00.0: Yukon-2 Extreme chip revision 2 [ 0.881871] sky2 0000:02:00.0: irq 45 for MSI/MSI-X [ 0.896273] sky2 0000:02:00.0: eth0: addr 00:1d:72:ec:76:d5 [ 0.910630] ahci 0000:00:1f.2: version 3.0 [ 0.910647] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.910710] ahci 0000:00:1f.2: irq 46 for MSI/MSI-X [ 0.910775] ahci: SSS flag set, parallel bus scan disabled [ 0.910812] ahci 0000:00:1f.2: AHCI 0001.0200 32 slots 4 ports 3 Gbps 0x33 impl SATA mode [ 0.910816] ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pio slum part ccc ems sxs [ 0.910821] ahci 0000:00:1f.2: setting latency timer to 64 [ 0.941773] scsi0 : ahci [ 0.941954] scsi1 : ahci [ 0.942038] scsi2 : ahci [ 0.942118] scsi3 : ahci [ 0.942196] scsi4 : ahci [ 0.942268] scsi5 : ahci [ 0.942332] ata1: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904100 irq 46 [ 0.942336] ata2: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904180 irq 46 [ 0.942339] ata3: DUMMY [ 0.942340] ata4: DUMMY [ 0.942344] ata5: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904300 irq 46 [ 0.942347] ata6: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904380 irq 46 [ 1.028061] usb 1-5: new high speed USB device number 2 using ehci_hcd [ 1.181775] usbcore: registered new interface driver uas [ 1.260062] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.261126] ata1.00: ATA-8: Hitachi HTS543225L9A300, FBEOC40C, max UDMA/133 [ 1.261129] ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA [ 1.262360] ata1.00: configured for UDMA/133 [ 1.262518] scsi 0:0:0:0: Direct-Access ATA Hitachi HTS54322 FBEO PQ: 0 ANSI: 5 [ 1.262716] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.262762] sd 0:0:0:0: [sda] 488397168 512-byte logical blocks: (250 GB/232 GiB) [ 1.262824] sd 0:0:0:0: [sda] Write Protect is off [ 1.262827] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.262851] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.287277] sda: sda1 sda2 sda3 [ 1.287693] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.580059] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 1.581188] ata2.00: ATAPI: HL-DT-STDVDRAM GT10N, 1.00, max UDMA/100 [ 1.582663] ata2.00: configured for UDMA/100 [ 1.584162] scsi 1:0:0:0: CD-ROM HL-DT-ST DVDRAM GT10N 1.00 PQ: 0 ANSI: 5 [ 1.585821] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray [ 1.585824] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 1.585953] sr 1:0:0:0: Attached scsi CD-ROM sr0 [ 1.586038] sr 1:0:0:0: Attached scsi generic sg1 type 5 [ 1.632061] usb 6-1: new low speed USB device number 2 using uhci_hcd [ 1.908056] ata5: SATA link down (SStatus 0 SControl 300) [ 2.228065] ata6: SATA link down (SStatus 0 SControl 300) [ 2.228955] Initializing USB Mass Storage driver... [ 2.229052] usbcore: registered new interface driver usb-storage [ 2.229054] USB Mass Storage support registered. 
[ 2.235827] scsi6 : usb-storage 1-5:1.0 [ 2.235987] usbcore: registered new interface driver ums-realtek [ 2.244451] input: B16_b_02 USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.0/input/input4 [ 2.244598] generic-usb 0003:046D:C025.0001: input,hidraw0: USB HID v1.10 Mouse [B16_b_02 USB-PS/2 Optical Mouse] on usb-0000:00:1d.0-1/input0 [ 2.244620] usbcore: registered new interface driver usbhid [ 2.244622] usbhid: USB HID core driver [ 3.091083] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null) [ 3.238275] scsi 6:0:0:0: Direct-Access Generic- Multi-Card 1.00 PQ: 0 ANSI: 0 CCS [ 3.348261] sd 6:0:0:0: Attached scsi generic sg2 type 0 [ 3.351897] sd 6:0:0:0: [sdb] Attached SCSI removable disk [ 47.138012] udevd[334]: starting version 173 [ 47.177678] lp: driver loaded but no devices found [ 47.197084] wmi: Mapper loaded [ 47.197526] acer_wmi: Acer Laptop ACPI-WMI Extras [ 47.210227] acer_wmi: Brightness must be controlled by generic video driver [ 47.566578] Disabling lock debugging due to kernel taint [ 47.584050] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 47.620666] type=1400 audit(1319239895.347:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=624 comm="apparmor_parser" [ 47.620934] type=1400 audit(1319239895.347:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=624 comm="apparmor_parser" [ 47.621108] type=1400 audit(1319239895.347:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=624 comm="apparmor_parser" [ 47.633056] [drm] Initialized drm 1.1.0 20060810 [ 47.722594] i915 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 47.722602] i915 0000:00:02.0: setting latency timer to 64 [ 47.807152] ndiswrapper (check_nt_hdr:141): kernel is 64-bit, but Windows driver is not 64-bit;bad magic: 010B [ 47.807159] ndiswrapper (load_sys_files:206): couldn't prepare driver 'rt2860' [ 47.807930] ndiswrapper (load_wrap_driver:108): couldn't load driver rt2860; check system log for messages from 'loadndisdriver' [ 47.856250] usbcore: registered new interface driver ndiswrapper [ 47.861772] i915 0000:00:02.0: irq 47 for MSI/MSI-X [ 47.861781] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 47.861783] [drm] Driver supports precise vblank timestamp query. [ 47.861842] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 47.980620] fixme: max PWM is zero. [ 48.286153] fbcon: inteldrmfb (fb0) is primary device [ 48.287033] Console: switching to colour frame buffer device 170x48 [ 48.287062] fb0: inteldrmfb frame buffer device [ 48.287064] drm: registered panic notifier [ 48.333883] acpi device:02: registered as cooling_device2 [ 48.334053] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 48.334128] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 48.334203] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 48.334644] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334652] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334673] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 48.334737] HDA Intel 0000:00:1b.0: irq 48 for MSI/MSI-X [ 48.334772] HDA Intel 0000:00:1b.0: setting latency timer to 64 [ 48.356107] Adding 261116k swap on /host/ubuntu/disks/swap.disk. 
Priority:-1 extents:1 across:261116k [ 48.380946] hda_codec: ALC268: BIOS auto-probing. [ 48.390242] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6 [ 48.390365] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 48.490870] EXT4-fs (loop0): re-mounted. Opts: errors=remount-ro,user_xattr [ 48.917990] ppdev: user-space parallel port driver [ 48.950729] type=1400 audit(1319239896.675:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=941 comm="apparmor_parser" [ 48.951114] type=1400 audit(1319239896.675:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=941 comm="apparmor_parser" [ 48.977706] Synaptics Touchpad, model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa44000/0xa0000 [ 49.048871] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input8 [ 49.078713] sky2 0000:02:00.0: eth0: enabling interface [ 49.079462] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 50.762266] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx [ 50.762702] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 54.751478] type=1400 audit(1319239902.475:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm-guest-session-wrapper" pid=1039 comm="apparmor_parser" [ 54.755907] type=1400 audit(1319239902.479:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1040 comm="apparmor_parser" [ 54.756237] type=1400 audit(1319239902.483:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1040 comm="apparmor_parser" [ 54.756417] type=1400 audit(1319239902.483:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1040 comm="apparmor_parser" [ 54.764825] type=1400 audit(1319239902.491:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=1041 comm="apparmor_parser" [ 54.768365] type=1400 audit(1319239902.495:12): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-previewer" pid=1041 comm="apparmor_parser" [ 54.770601] type=1400 audit(1319239902.495:13): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-thumbnailer" pid=1041 comm="apparmor_parser" [ 54.770729] type=1400 audit(1319239902.495:14): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=1038 comm="apparmor_parser" [ 54.775181] type=1400 audit(1319239902.499:15): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1043 comm="apparmor_parser" [ 54.775533] type=1400 audit(1319239902.499:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1043 comm="apparmor_parser" [ 54.936691] init: failsafe main process (891) killed by TERM signal [ 54.944583] init: apport pre-start process (1096) terminated with status 1 [ 55.000373] init: apport post-stop process (1160) terminated with status 1 [ 55.005291] init: gdm main process (1159) killed by TERM signal [ 59.782579] EXT4-fs (loop0): re-mounted. 
Opts: errors=remount-ro,user_xattr,commit=0 [ 60.992021] eth0: no IPv6 routers present [ 61.936072] device eth0 entered promiscuous mode [ 62.053949] Bluetooth: Core ver 2.16 [ 62.054005] NET: Registered protocol family 31 [ 62.054007] Bluetooth: HCI device and connection manager initialized [ 62.054010] Bluetooth: HCI socket layer initialized [ 62.054012] Bluetooth: L2CAP socket layer initialized [ 62.054993] Bluetooth: SCO socket layer initialized [ 62.058750] Bluetooth: RFCOMM TTY layer initialized [ 62.058758] Bluetooth: RFCOMM socket layer initialized [ 62.058760] Bluetooth: RFCOMM ver 1.11 [ 62.059428] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 62.059432] Bluetooth: BNEP filters: protocol multicast [ 62.460389] init: plymouth-stop pre-start process (1662) terminated with status 1 '

    Read the article

  • ActiveMQ - "Cannot send, channel has already failed" every 2 seconds?

    - by quanta
    ActiveMQ 5.7.0 In the activemq.log, I'm seeing this exception every 2 seconds: 2013-11-05 13:00:52,374 | DEBUG | Transport Connection to: tcp://127.0.0.1:37501 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Due to this keyword InactivityIOException, the first thing comes to my mind is InactivityMonitor, but the strange thing is MaxInactivityDuration=30000: 2013-11-05 13:11:02,672 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-2 Moreover, I also didn't see something like this: No message received since last read check for ... or: Channel was inactive for too (30000) long Do a netstat, I see these connections in TIME_WAIT state: tcp 0 0 127.0.0.1:38545 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38544 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38522 127.0.0.1:61616 TIME_WAIT - Here're the output when running tcpdump: Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) 0000 00.. = Differentiated Services Codepoint: Default (0x00) .... ..00 = Explicit Congestion Notification: Not-ECT (Not ECN-Capable Transport) (0x00) Total Length: 296 Identification: 0x7b6a (31594) Flags: 0x02 (Don't Fragment) 0... .... = Reserved bit: Not set .1.. .... = Don't fragment: Set ..0. .... 
= More fragments: Not set Fragment offset: 0 Time to live: 64 Protocol: TCP (6) Header checksum: 0xc063 [correct] [Good: True] [Bad: False] Source: 127.0.0.1 (127.0.0.1) Destination: 127.0.0.1 (127.0.0.1) Transmission Control Protocol, Src Port: 61616 (61616), Dst Port: 54669 (54669), Seq: 1, Ack: 2, Len: 244 Source port: 61616 (61616) Destination port: 54669 (54669) [Stream index: 11] Sequence number: 1 (relative sequence number) [Next sequence number: 245 (relative sequence number)] Acknowledgement number: 2 (relative ack number) Header length: 32 bytes Flags: 0x018 (PSH, ACK) 000. .... .... = Reserved: Not set ...0 .... .... = Nonce: Not set .... 0... .... = Congestion Window Reduced (CWR): Not set .... .0.. .... = ECN-Echo: Not set .... ..0. .... = Urgent: Not set .... ...1 .... = Acknowledgement: Set .... .... 1... = Push: Set .... .... .0.. = Reset: Not set .... .... ..0. = Syn: Not set .... .... ...0 = Fin: Not set Window size value: 256 [Calculated window size: 32768] [Window size scaling factor: 128] Checksum: 0xff1c [validation disabled] [Good Checksum: False] [Bad Checksum: False] Options: (12 bytes) No-Operation (NOP) No-Operation (NOP) Timestamps: TSval 2304161892, TSecr 2304161891 Kind: Timestamp (8) Length: 10 Timestamp value: 2304161892 Timestamp echo reply: 2304161891 [SEQ/ACK analysis] [Bytes in flight: 244] Constrained Application Protocol, TID: 240, Length: 244 00.. .... = Version: 0 ..00 .... = Type: Confirmable (0) .... 0000 = Option Count: 0 Code: Unknown (0) Transaction ID: 240 Payload Content-Type: text/plain (default), Length: 240, offset: 4 Line-based text data: text/plain [truncated] \001ActiveMQ\000\000\000\t\001\000\000\000<DE>\000\000\000\t\000\fMaxFrameSize\006\177<FF><FF><FF><FF> <FF><FF><FF>\000\tCacheSize\005\000\000\004\000\000\fCacheEnabled\001\001\000\022SizePrefixDisabled\001\000\000 MaxInactivityDurationInitalDelay\006\ It is very likely a tcp port check. 
This is what I see when trying telnet from another host: 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: java.io.EOFException | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///10.8.20.9:46775@61616 java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:275) at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:229) at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:221) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Unregistering MBean org.apache.activemq:BrokerName=localhost,Type=Connection,ConnectorName=ope nwire,ViewType=address,Name=tcp_//10.8.20.9_46775 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:/ //10.8.20.9:46775@61616 2013-11-05 16:12:41,073 | DEBUG | Stopping connection: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Stopping transport tcp:///10.8.20.9:46775@61616 | org.apache.activemq.transport.tcp.TcpTranspo rt | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.Threa dPoolExecutor@23cc2a28 | org.apache.activemq.thread.TaskRunnerFactory | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Closed socket Socket[addr=/10.8.20.9,port=46775,localport=61616] | org.apache.activemq.transpo rt.tcp.TcpTransport | ActiveMQ Task-1 2013-11-05 16:12:41,074 | DEBUG | Forcing shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor@23cc2a28 | org.apache.activemq.util.ThreadPoolUtils | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Stopped transport: tcp://10.8.20.9:46775 | 
org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Connection Stopped: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,902 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-5 So the question is: how can I find out the process that is trying to connect to my ActiveMQ (from localhost) every 2 seconds?

    Read the article

  • Handshake violation when trying to access one website

    - by Miguel
    I have a TZ 190 Wireless Enhanced running SonicOS Enhanced 4.2.1.0-20e. Yesterday, people could access a bank website that uses HTTPS without any problems. Today it is impossible to access only that website; every other site works without problems. When I check the log, filtering on my IP only, this is what appears, and I suspect it is the cause of the problem, because all other websites are working: Priority: Notice Category: Network Access Message: TCP handshake violation detected; TCP connection dropped Source: X.Y.Z.3, 51997, LAN (admin) Destination: 200.14.232.18, 443, WAN Notes: Handshake Timeout Where X.Y.Z.3 is my local IP. I've tried changing the TCP settings under the Firewall options and activated these options with no success: Enforce strict TCP compliance with RFC 793 and RFC 1122, and Enable TCP checksum enforcement. I've also tried to find the MTU; at first I got "Packet needs to be fragmented but DF set", but when I lower the value of ping -f -l to 1468 I get "Request timeout". I also deactivated CFS in the LAN and WAN zones. Nothing works. Can you please help me? Any ideas?

    Read the article

  • JAMES Mailet development process

    - by itsadok
    I'm starting a project that involves writing mailets for Apache James. As far as I can tell, the only way to test a change in my code (on Windows) is through the following steps:
    1. Compile the mailet code
    2. Build a jar file containing the mailet
    3. Copy the jar file into the apps/james/SAR-INF/lib directory
    4. Start JAMES from run.bat
    5. Run the test
    6. Stop JAMES by telnetting to port 4555 and issuing a shutdown command (I guess on Linux a SIGTERM would suffice)
    I can automate all these steps using Ant and some scripting magic, but I was wondering if I was missing something. Does anyone here have experience developing mailets? Did you use a similar process, or is there an easier way? For example, is there a way to make a running James instance reload the mailets JAR?

    Read the article

  • Why does model => model.Reason_ID turn into model => Convert(model.Reason_ID)?

    - by er-v
    I have my own HTML helper extension, which I use this way: <%= Html.LocalizableLabelFor(model => model.Reason_ID, Register.PurchaseReason) %> and which is declared like this: public static MvcHtmlString LocalizableLabelFor<T>(this HtmlHelper<T> helper, Expression<Func<T, object>> expr, string captionValue) where T : class { return helper.LocalizableLabelFor(ExpressionHelper.GetExpressionText(expr), captionValue); } But when I open it in the debugger, expr.Body.ToString() shows me Convert(model.Reason_ID), when it should be model.Reason_ID. That's a big problem, because ExpressionHelper.GetExpressionText(expr) returns an empty string. What strange magic is this? How can I get rid of it?
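    As a side note on the behavior itself: when the member is a value type (an int ID, for example), an Expression<Func<T, object>> forces a boxing conversion to object, which is why the debugger shows Convert(model.Reason_ID). Below is a minimal sketch of unwrapping that node before asking for the expression text; the helper name GetMemberText is made up for illustration, and it assumes the System.Web.Mvc.ExpressionHelper overload that accepts a LambdaExpression.

    using System;
    using System.Linq.Expressions;
    using System.Web.Mvc;

    public static class ExpressionTextSketch
    {
        // Returns "Reason_ID" for model => model.Reason_ID even when the member is a
        // value type and the compiler has wrapped the access in a boxing Convert(...).
        public static string GetMemberText<T>(Expression<Func<T, object>> expr)
        {
            Expression body = expr.Body;
            if (body.NodeType == ExpressionType.Convert ||
                body.NodeType == ExpressionType.ConvertChecked)
            {
                body = ((UnaryExpression)body).Operand;   // strip the boxing conversion
            }

            return ExpressionHelper.GetExpressionText(Expression.Lambda(body, expr.Parameters));
        }
    }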

    Read the article

  • extreme slowness with a remote database in Drupal

    - by ceejayoz
    We're attempting to scale our Drupal installations up and have decided on some dedicated MySQL boxes. Unfortunately, we're running into extreme slowness when we attempt to use the remote DB - page load times go from ~200 milliseconds to 5-10 seconds. Latency between the servers is minimal - a tenth or two of a millisecond. PING 10.37.66.175 (10.37.66.175) 56(84) bytes of data. 64 bytes from 10.37.66.175: icmp_seq=1 ttl=64 time=0.145 ms 64 bytes from 10.37.66.175: icmp_seq=2 ttl=64 time=0.157 ms 64 bytes from 10.37.66.175: icmp_seq=3 ttl=64 time=0.157 ms 64 bytes from 10.37.66.175: icmp_seq=4 ttl=64 time=0.144 ms 64 bytes from 10.37.66.175: icmp_seq=5 ttl=64 time=0.121 ms 64 bytes from 10.37.66.175: icmp_seq=6 ttl=64 time=0.122 ms 64 bytes from 10.37.66.175: icmp_seq=7 ttl=64 time=0.163 ms 64 bytes from 10.37.66.175: icmp_seq=8 ttl=64 time=0.115 ms 64 bytes from 10.37.66.175: icmp_seq=9 ttl=64 time=0.484 ms 64 bytes from 10.37.66.175: icmp_seq=10 ttl=64 time=0.156 ms --- 10.37.66.175 ping statistics --- 10 packets transmitted, 10 received, 0% packet loss, time 8998ms rtt min/avg/max/mdev = 0.115/0.176/0.484/0.104 ms Drupal's devel.module timers show the database queries aren't running any slower on the remote DB - about 150 microseconds whether it's the local or the remote server. Profiling with XHProf shows PHP execution times that aren't out of whack, either. The number of queries doesn't seem to make a difference - we see the same 5-10 second delay whether a page has 12 queries or 250. Any suggestions about where I should start troubleshooting here? I'm quite confused.

    Read the article

  • Spring 3 Security Authentication Success Handler

    - by Eqbal
    I am using form-login for security and I am trying to implement an authentication success handler, but I am not sure how to go back to the resource that was initially requested before the login process. By default I think it uses a SimpleUrlAuthenticationSuccessHandler, and I tried to mirror that class's implementation. But it calls setDefaultTargetUrl(defaultTargetUrl), and perhaps that's where the magic happens that lets it remember the resource to go back to after the login process. Any help is greatly appreciated. Below is my Spring Security <form-login/> element: <form-login login-page="/login.jsp" login-processing-url="/b2broe_login" authentication-success-handler-ref="passwordExpiredHandler" authentication-failure-url="/login.jsp?loginfailed=true" />

    Read the article

  • Can I configure Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    - by O.Shevchenko
    I am running an SCEP client to enroll certificates on an NDES server. If OpenSSL is not in FIPS mode, everything works fine. In FIPS mode I get the following error: pkcs7_unwrap():pkcs7.c:708] error decrypting inner PKCS#7 139968442623728:error:060A60A3:digital envelope routines:FIPS_CIPHERINIT:disabled for fips:fips_enc.c:142: 139968442623728:error:21072077:PKCS7 routines:PKCS7_decrypt:decrypt error:pk7_smime.c:557: That's because the NDES server uses the DES algorithm to encrypt the returned PKCS#7 packet. I used the following debug code: /* Copy enveloped data from PKCS#7 */ bytes = BIO_read(pkcs7bio, buffer, sizeof(buffer)); BIO_write(outbio, buffer, bytes); p7enc = d2i_PKCS7_bio(outbio, NULL); /* Get encryption PKCS#7 algorithm */ enc_alg=p7enc->d.enveloped->enc_data->algorithm; evp_cipher=EVP_get_cipherbyobj(enc_alg->algorithm); printf("evp_cipher->nid = %d\n", evp_cipher->nid); The last statement always prints evp_cipher->nid = 31, which is defined in openssl-1.0.1c/include/openssl/objects.h: #define SN_des_cbc "DES-CBC" #define LN_des_cbc "des-cbc" #define NID_des_cbc 31 I use the 3DES algorithm for PKCS#7 request encryption in my code (pscep.enc_alg = (EVP_CIPHER *)EVP_des_ede3_cbc()), and the NDES server accepts these requests, but it always returns the answer encrypted with DES. Can I configure the Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    Read the article

  • Active RDP session over VPN getting disconnected

    - by Wandering Penguin
    I am having seemingly random disconnects of active RDP sessions (while I am actively typing or otherwise interacting with the desktop) when connected over the VPN. The "attempting to reconnect 1/20" message pops up, proceeds all the way through 20, then drops. Once the session drops I can open a new session and connect again. This started happening about a week ago. The VPN connection is an IPSec VPN connection from a SonicWall NSA 2400. The NIC drivers are up to date. The VPN client is up to date. The firmware on the SonicWall is up to date (both the regular and the early-release versions behave the same). I have attempted to connect over three ISPs, all with the same behavior. Two different workstations were used to test the VPN connection. The same behavior occurs when connecting to a domain workstation or server. If I am inside the firewall, I can connect to the same workstations and servers without the disconnects. The VPN connection has "enable fragmented packet handling" and "ignore DF (don't fragment) bit" set. Is there something I am missing in where I am looking for the problem?

    Read the article

  • Why allow concatenation of string literals?

    - by Caspin
    I recently got bit by a subtle bug. const char* int2str[] = { "zero", // 0 "one", // 1 "two" // 2 "three",// 3 nullptr }; assert( int2str[1] == "one"_s ); // passes assert( int2str[2] == "two"_s ); // fails If you have godlike code review powers you'll notice I forgot the , after "two". After the considerable effort it took to find that bug, I've got to ask: why would anyone ever want this behavior? I can see how this might be useful for macro magic, but then why is this a "feature" in a modern language like Python? Have you ever used string literal concatenation in production code?

    Read the article

  • Determining software estimates and tracking past estimates

    - by Casey
    I know that this probably has as many answers as there are users here on SO, but software estimation has always seemed like an esoteric science. Software developers don't have a magic book to refer to, as exists in many other industries. I've been spending the last couple of days putting together some estimates for a bit of work that I am proposing for a freelance project, and am having trouble getting it down. I'm not experienced with any real software estimation practices and am trying to go from the gut based on my experience, while also trying to be a little loose (not too loose, though) on the estimates to leave myself a bit of room to work. I read this blog entry http://blogs.popart.com/2007/07/what-scotty-from-star-trek-can-teach-us-about-managing-expectations/ that was linked to from SO, and would like to start tracking my estimates at work as well, even though I'm not really required to create estimates there. What tools or techniques would you recommend? Also, how much padding do you usually add in to a time estimate?

    Read the article

  • jQuery Thickbox and Google Maps Extinfowindow

    - by cdonner
    I am trying to place a link into an ExtInfoWindow that obtains its content through an Ajax call. So, I click on a push pin marker, up pops the ExtInfoWindow with my ThickBox link in it, and when I inspect the DOM for the entire page at that point, I can see the element correctly showing up with the "thickbox" class. The link looks like this: <A class="thickbox" title="" href="http://localhost:1293/Popup.aspx?height=200&width=300&modal=true">Modal Popup</A> However, when I click on it, it does a full refresh and the target page loads in the browser, not in a popup. It seems that when the <A> for the ThickBox control is injected into the DOM after the initial load, jQuery is no longer able to do its magic and intercept the anchor link request. Does anybody have thoughts about how to do this better?

    Read the article

  • Relative Resizing of Forms in C#

    - by xarzu
    What is the magic that makes components cling to the edges of a form? I had thought that one must use the form's Resize event and then force each element in the form to resize. But then I saw some sample code in which, even while I am editing the form, the elements seem to adhere to a percentage of the space they take up in the form rather than a set dimension. In other words, when I am editing the form and resizing it, the panels and the parts inside the form bend their shape such that their edges remain a few pixels from the form's edges. But in my own program I have not been able to find where I can duplicate this feature. When I run my program, this (http://i67.photobucket.com/albums/h292/Athono/microsoft/001.jpg) goes to this (http://i67.photobucket.com/albums/h292/Athono/microsoft/002.jpg).
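    For what it's worth, the behavior described is typically what the WinForms Anchor (and Dock) properties provide at design time, with no Resize event handler needed. Below is a minimal sketch under that assumption; the control names and sizes are made up for illustration.

    using System;
    using System.Windows.Forms;

    public class AnchoredForm : Form
    {
        public AnchoredForm()
        {
            // A hypothetical text box that stretches horizontally with the form,
            // keeping a fixed distance from the left, right and top edges.
            var searchBox = new TextBox
            {
                Left = 12, Top = 12, Width = ClientSize.Width - 24,
                Anchor = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right
            };

            // A hypothetical results panel that grows in both directions,
            // keeping a constant margin on all four sides as the form resizes.
            var resultsPanel = new Panel
            {
                Left = 12, Top = 44,
                Width = ClientSize.Width - 24, Height = ClientSize.Height - 56,
                Anchor = AnchorStyles.Top | AnchorStyles.Bottom |
                         AnchorStyles.Left | AnchorStyles.Right,
                BorderStyle = BorderStyle.FixedSingle
            };

            Controls.Add(searchBox);
            Controls.Add(resultsPanel);
        }

        [STAThread]
        private static void Main()
        {
            Application.EnableVisualStyles();
            Application.Run(new AnchoredForm());
        }
    }

    Anchoring a control to opposite edges (Left and Right, or Top and Bottom) is what makes it stretch with the form; Dock = DockStyle.Fill is the other common option when a control should take up all remaining space.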

    Read the article

  • Implement QoS/Bandwidth Management or Upgrade Bandwidth?

    - by Michael
    A question that I'm faced with currently. Here's my setup: Cisco ASA 5510 15Mbps Internet Connection @ $1350/month The bandwidth was originally meant for 35-45 people but we've grown quite quickly to roughly 60-65 people. Needless to say, when I check bandwidth logs it's almost always spiked at 15Mbps. I did use Wireshark to do some poking around to see what was hogging up our bandwidth but with everything running through CDNs and Cloud Services it proved difficult to get a good grasp of where our bandwidth was going. So the question is do I ONLY implement bandwidth management through ASA OR upgrade the Internet to 50Mbps ($1600/month) and then implement bandwidth management through ASA? Any suggestions on how to segment the 15Mbps connection if we decided ONLY to go with the bandwidth management solution? Thanks. UPDATE 1 Installed PRTG and used packet content to monitor the traffic. As I suspected still pretty vague. My Top Connections include the following: a204-2-160-16.deploy.akamaitechnologies.com ec2-50-16-212-159.compute-1.amazonaws.com a204-2-160-48.deploy.akamaitechnologies.com a72-247-247-133.deploy.akamaitechnologies.com mediaserver-sv5-t1-1.pandora.com Other than the Pandora destination, the rest doesn't tell me much on how to properly control the bandwidth. Any thoughts or suggestions? Thanks. M

    Read the article

  • Snort's problems in generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts for that section only, and all of the packets in the first hours are ignored. Another problem in generating alerts is the time stamp of the generated alerts: when I run Snort on a specific day of the dataset, Snort inserts an incorrect time stamp for that alert. The configuration, command line and other information about my setup are below. Snort version: 2.8.6 Operating system: Windows XP Rule version: snortrules-snapshot-2860_s.tar.gz Command line: snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230 Snort.config: # Setup the network addresses you are protecting var HOME_NET any # Set up the external network addresses. Leave as "any" in most situations var EXTERNAL_NET any # List of DNS servers on your network var DNS_SERVERS $HOME_NET # List of SMTP servers on your network var SMTP_SERVERS $HOME_NET # List of web servers on your network var HTTP_SERVERS $HOME_NET # List of sql servers on your network var SQL_SERVERS $HOME_NET # List of telnet servers on your network var TELNET_SERVERS $HOME_NET # List of ssh servers on your network var SSH_SERVERS $HOME_NET # List of ports you run web servers on portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999] # List of ports you want to look for SHELLCODE on. 
portvar SHELLCODE_PORTS !80 # List of ports you might see oracle attacks on portvar ORACLE_PORTS 1024: # List of ports you want to look for SSH connections on: portvar SSH_PORTS 22 # other variables, these should not be modified var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24] var RULE_PATH ../rules var SO_RULE_PATH ../so_rules var PREPROC_RULE_PATH ../preproc_rules # Stop generic decode events: config disable_decode_alerts # Stop Alerts on experimental TCP options config disable_tcpopt_experimental_alerts # Stop Alerts on obsolete TCP options config disable_tcpopt_obsolete_alerts # Stop Alerts on T/TCP alerts config disable_tcpopt_ttcp_alerts # Stop Alerts on all other TCPOption type events: config disable_tcpopt_alerts # Stop Alerts on invalid ip options config disable_ipopt_alerts # Alert if value in length field (IP, TCP, UDP) is greater th elength of the packet # config enable_decode_oversized_alerts # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts) # config enable_decode_oversized_drops # Configure IP / TCP checksum mode config checksum_mode: all config pcre_match_limit: 1500 config pcre_match_limit_recursion: 1500 # Configure the detection engine See the Snort Manual, Configuring Snort - Includes - Config config detection: search-method ac-split search-optimize max-pattern-len 20 # Configure the event queue. For more information, see README.event_queue config event_queue: max_queue 8 log 3 order_events content_length dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll # path to dynamic rules libraries #dynamicdetection directory /usr/local/lib/snort_dynamicrules preprocessor frag3_global: max_frags 65536 preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180 preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \ overlap_limit 10, small_segments 3 bytes 150, timeout 180, \ ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \ 161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \ 7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \ ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \ 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999 preprocessor stream5_udp: timeout 180 preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480 preprocessor http_inspect_server: server default \ chunk_length 500000 \ server_flow_depth 0 \ client_flow_depth 0 \ post_depth 65495 \ oversize_dir_length 500 \ max_header_length 750 \ max_headers 100 \ ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \ non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \ enable_cookie \ extended_response_inspection \ inspect_gzip \ apache_whitespace no \ ascii no \ bare_byte no \ directory no \ double_decode no \ iis_backslash no \ iis_delimiter no \ iis_unicode no \ multi_slash no \ non_strict \ u_encode yes \ webroot no preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 
32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete preprocessor bo preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no preprocessor ftp_telnet_protocol: telnet \ ayt_attack_thresh 20 \ normalize ports { 23 } \ detect_anomalies preprocessor ftp_telnet_protocol: ftp server default \ def_max_param_len 100 \ ports { 21 2100 3535 } \ telnet_cmds yes \ ignore_telnet_erase_cmds yes \ ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \ ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \ ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \ ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \ ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \ ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \ ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \ ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \ ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \ ftp_cmds { XSEN XSHA1 XSHA256 } \ alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \ alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \ alt_max_param_len 256 { CWD RNTO } \ alt_max_param_len 400 { PORT } \ alt_max_param_len 512 { SIZE } \ chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \ chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \ chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \ chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \ chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \ chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \ chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \ chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \ cmd_validity ALLO \ cmd_validity EPSV \ cmd_validity MACB \ cmd_validity MDTM \ cmd_validity MODE \ cmd_validity PORT \ cmd_validity PROT \ cmd_validity STRU \ cmd_validity TYPE preprocessor ftp_telnet_protocol: ftp client default \ max_resp_len 256 \ bounce yes \ ignore_telnet_erase_cmds yes \ telnet_cmds yes preprocessor smtp: ports { 25 465 587 691 } \ inspection_type stateful \ normalize cmds \ normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ max_command_line_len 512 \ max_header_line_len 1000 \ max_response_line_len 512 \ alt_max_command_line_len 260 { MAIL } \ alt_max_command_line_len 300 { RCPT } \ alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \ alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \ alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ xlink2state { enabled } preprocessor ssh: server_ports { 22 } \ autodetect \ max_client_bytes 19600 \ max_encrypted_packets 20 \ max_server_version_len 100 \ enable_respoverflow enable_ssh1crc32 \ enable_srvoverflow enable_protomismatch preprocessor dcerpc2: memcap 102400, events [co ] preprocessor dcerpc2_server: default, policy WinXP, \ detect [smb [139,445], tcp 135, udp 135, 
rpc-over-http-server 593], \ autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \ smb_max_chain 3 preprocessor dns: ports { 53 } enable_rdata_overflow preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted # SDF sensitive data preprocessor. For more information see README.sensitive_data preprocessor sensitive_data: alert_threshold 25 output alert_full: alert.log output database: log, mysql, user=root password=123456 dbname=snort host=localhost include classification.config include reference.config include $RULE_PATH/local.rules include $RULE_PATH/attack-responses.rules include $RULE_PATH/backdoor.rules include $RULE_PATH/bad-traffic.rules include $RULE_PATH/chat.rules include $RULE_PATH/content-replace.rules include $RULE_PATH/ddos.rules include $RULE_PATH/dns.rules include $RULE_PATH/dos.rules include $RULE_PATH/exploit.rules include $RULE_PATH/finger.rules include $RULE_PATH/ftp.rules include $RULE_PATH/icmp.rules include $RULE_PATH/icmp-info.rules include $RULE_PATH/imap.rules include $RULE_PATH/info.rules include $RULE_PATH/misc.rules include $RULE_PATH/multimedia.rules include $RULE_PATH/mysql.rules include $RULE_PATH/netbios.rules include $RULE_PATH/nntp.rules include $RULE_PATH/oracle.rules include $RULE_PATH/other-ids.rules include $RULE_PATH/p2p.rules include $RULE_PATH/policy.rules include $RULE_PATH/pop2.rules include $RULE_PATH/pop3.rules include $RULE_PATH/rpc.rules include $RULE_PATH/rservices.rules include $RULE_PATH/scada.rules include $RULE_PATH/scan.rules include $RULE_PATH/shellcode.rules include $RULE_PATH/smtp.rules include $RULE_PATH/snmp.rules include $RULE_PATH/specific-threats.rules include $RULE_PATH/spyware-put.rules include $RULE_PATH/sql.rules include $RULE_PATH/telnet.rules include $RULE_PATH/tftp.rules include $RULE_PATH/virus.rules include $RULE_PATH/voip.rules include $RULE_PATH/web-activex.rules include $RULE_PATH/web-attacks.rules include $RULE_PATH/web-cgi.rules include $RULE_PATH/web-client.rules include $RULE_PATH/web-coldfusion.rules include $RULE_PATH/web-frontpage.rules include $RULE_PATH/web-iis.rules include $RULE_PATH/web-misc.rules include $RULE_PATH/web-php.rules include $RULE_PATH/x11.rules include threshold.conf -————————————————————————————- Can anyone help me to solve this problem? Thanks.

    Read the article

  • Relative Resizing of Forms in .NET

    - by xarzu
    What is the magic that makes components cling to the edges of a form? I had thought one must handle the form's Resize event and then force each element on the form to resize. But then I saw some sample code in which, even while I am editing the form in the designer, the elements seem to occupy a percentage of the form's space rather than a fixed dimension. In other words, when I resize the form in the designer, the panels and the controls inside them reshape themselves so that their edges stay a few pixels from the form's edges. In my own program I have not been able to find where to duplicate this feature; when I run it and resize the form, the controls do not follow (see the sketch below).
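
    For reference, the behaviour described above normally comes from the controls' Anchor (or Dock) property rather than from handling the Resize event. A minimal C# sketch; the form and panel names are made up for illustration, not taken from the post:

        using System.Drawing;
        using System.Windows.Forms;

        public class ResizableForm : Form
        {
            public ResizableForm()
            {
                var panel = new Panel
                {
                    Location = new Point(10, 10),
                    Size = new Size(260, 220),
                    // Anchoring to all four edges keeps a fixed margin to each
                    // edge, so the panel stretches and shrinks with the form.
                    Anchor = AnchorStyles.Top | AnchorStyles.Bottom |
                             AnchorStyles.Left | AnchorStyles.Right
                };
                Controls.Add(panel);
            }
        }

    In the designer this is simply the Anchor entry in the Properties window; Dock = DockStyle.Fill is the other common option when a control should take up all remaining space.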

    Read the article

  • Problem with PXE boot

    - by user70523
    Hi, I followed this guide for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3. I was able to ping the client from the server, and when I booted the client it obtained an IP address from the server. But later I got this error:

    PXELinux 3.82 2009-06-09 . . . [other information]
    !PXE Entry point found (we hope) at 9D3B:0109 via plan A
    UNDI code segment at 9D3B len 16C2
    UNDI data segment at 933B len A000
    Getting cached packet 01 02 03
    . . . [other information]
    TFTP prefix:
    Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
    Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
    Trying to load: pxelinux.cfg/0A64491E
    Trying to load: pxelinux.cfg/0A64491
    Trying to load: pxelinux.cfg/0A6449
    Trying to load: pxelinux.cfg/0A644
    Trying to load: pxelinux.cfg/0A64
    Trying to load: pxelinux.cfg/0A6
    Trying to load: pxelinux.cfg/0A
    Trying to load: pxelinux.cfg/0
    Trying to load: pxelinux.cfg/default
    Unable to locate configuration file
    Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the guide into tftpboot. Can anyone explain what the problem could be? Thanks in advance.
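
    The last lines of that log usually mean pxelinux reached the TFTP server but none of the pxelinux.cfg/* names it tried exist under the TFTP root, so it falls back to pxelinux.cfg/default and gives up. A minimal sketch of that fallback file; the path follows the guide's /var/lib/tftpboot layout and the kernel/initrd names are illustrative, not necessarily the poster's:

        # Minimal example of /var/lib/tftpboot/pxelinux.cfg/default
        DEFAULT install
        LABEL install
            KERNEL ubuntu-installer/i386/linux
            APPEND initrd=ubuntu-installer/i386/initrd.gz vga=normal

    If that file already exists, double-check that the TFTP daemon's root directory matches the directory the pxelinux.cfg directory actually lives in.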

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems running IPMI on my servers that have network bonding enabled.

    Platform: CentOS release 5.3 (Final)
    Kernel: 2.6.18-92.el5 64bit
    Dell PowerEdge 1950
    Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface; below is the configuration from /proc:

    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: eth0
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 30
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:22:19:56:b9:cd
    Slave Interface: eth1
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

    IPMI Device Information
    Interface Type: KCS (Keyboard Control Style)
    Specification Version: 2.0
    I2C Slave Address: 0x10
    NV Storage Device: Not Present
    Base Address: 0x0000000000000CA8 (I/O)
    Register Spacing: 32-bit Boundaries

    I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out. Below is the full run of the command with debug info:

    ipmi_lan_send_cmd:opened=[0], open=[4482848]
    IPMI LAN host 70.87.28.115 port 623
    Sending IPMI/RMCP presence ping packet
    ipmi_lan_send_cmd:opened=[1], open=[4482848]
    No response from remote controller
    Get Auth Capabilities command failed
    ipmi_lan_send_cmd:opened=[1], open=[4482848]
    No response from remote controller
    Get Auth Capabilities command failed
    Error: Unable to establish LAN session
    Failed to open LAN interface
    Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone helps circumvent this issue. Muhammed Sameer
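
    On many Dell PowerEdge boxes the BMC shares the first onboard NIC, so it can stop answering once the bond takes over the MAC and ARP handling. A sketch of the sort of checks that often help, assuming ipmitool is installed on the host and the BMC sits on LAN channel 1 (the channel number is an assumption; adjust as needed):

        # Inspect the BMC's own LAN settings over the local KCS interface
        ipmitool lan print 1

        # Have the BMC answer ARP for its address itself instead of relying
        # on the host stack, which the bond may interfere with
        ipmitool lan set 1 arp respond on
        ipmitool lan set 1 arp generate on

    Giving the BMC a static address of its own (ipmitool lan set 1 ipsrc static) is also worth trying if it currently shares addressing with the bonded interface.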

    Read the article

  • Show EPS file in IE8 with standards mode set to IE8

    - by runrunraygun
    <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8"> With this set, my EPS files are not rendered by ABCpdf, but with it set to EmulateIE7 they work fine. I know EPS isn't a standard web format, but the files are embedded into a PDF using a bit of ABCpdf magic. Because IE8 isn't even trying to show them, they no longer appear in the PDF as they previously did. <embed style="abcpdf-tag-visible: true;width:40px;height:40px;" id="C:\myfile.eps" src="myfile.eps">
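
    Until the EPS handling itself is sorted out, one workaround is to send the compatibility mode as an HTTP header for the pages ABCpdf renders, rather than relying on per-page meta tags. A sketch for IIS7 using the standard <system.webServer> configuration (verify the section against your server version):

        <configuration>
          <system.webServer>
            <httpProtocol>
              <customHeaders>
                <!-- Force the IE7 document mode under which the EPS embeds rendered correctly -->
                <add name="X-UA-Compatible" value="IE=EmulateIE7" />
              </customHeaders>
            </httpProtocol>
          </system.webServer>
        </configuration>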

    Read the article
