Search Results

Search found 1367 results on 55 pages for 'matthew pk'.


  • Changing the indexing on an existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    3. Recreate the PK with a NON-CLUSTERED index
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data, and then creating a CLUSTERED index on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react: will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide; I think it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
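
    For what it's worth, a minimal sketch of the copy-and-swap approach described above (table, column and constraint names are hypothetical placeholders; in practice permissions, defaults and any other indexes on the old table would also need recreating):

        -- Build the replacement table with the proposed index layout
        CREATE TABLE dbo.BigTable_New (
            ID   INT IDENTITY(1,1) NOT NULL,
            ColA INT NOT NULL,
            ColB INT NOT NULL,
            CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (ID),
            CONSTRAINT UQ_BigTable_New_AB UNIQUE CLUSTERED (ColA, ColB)
        )

        -- Copy the data across, preserving existing identity values
        SET IDENTITY_INSERT dbo.BigTable_New ON
        INSERT INTO dbo.BigTable_New (ID, ColA, ColB)
        SELECT ID, ColA, ColB FROM dbo.BigTable
        SET IDENTITY_INSERT dbo.BigTable_New OFF

        -- Swap the tables
        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old'
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable'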

    Read the article

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times

    - by Andy
    I have this DB design:

        CREATE TABLE report (
            ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
            user MEDIUMINT NOT NULL,
            created TIMESTAMP NOT NULL,
            state INT NOT NULL,
            FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
            ID MEDIUMINT NOT NULL,
            k VARCHAR(128) NOT NULL,
            v TEXT NOT NULL,
            PRIMARY KEY( ID, k ),
            FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate mapping:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns(
                @JoinColumn(name="ID", referencedColumnName="ID")
            )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties =
                new HashMap<String, ReportPropertyEntity>();
        }

    and:

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {

            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {

                /**
                 * long#serialVersionUID
                 */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    I have inserted one Report and 4 properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get the report back 4 times, instead of just once with a Map containing the 4 properties. I'm not great at Hibernate, to be honest - I prefer straight SQL, but I must learn - and I can't see what is wrong. Any suggestions?

    Read the article

  • Forcing the use of a custom QuerySet

    - by Jack M.
    I'm trying to secure an application so that users can only see objects which are assigned to them. I've got a custom QuerySet which works for this, but I'm trying to find a way to force the use of this additional functionality. Here is my model:

        class Inquiry(models.Model):
            ts = models.DateTimeField(auto_now_add=True)
            assigned_to_user = models.ForeignKey(User, blank=True, null=True,
                                                 related_name="assigned_inquiries")
            objects = CustomQuerySetManager()

            class QuerySet(QuerySet):
                def for_user(self, user):
                    return self.filter(assigned_to_user=user)

    (The CustomQuerySetManager is documented over here, if it is important.) I'm trying to force everything to use this filtering, so that other methods will raise an exception. For example:

        Inquiry.objects.all()                                 ## Should raise an exception.
        Inquiry.objects.filter(pk=69)                         ## Should raise an exception.
        Inquiry.objects.for_user(request.user).filter(pk=69)  ## Should work.
        inqs = Inquiry.objects.for_user(request.user)         ## Should work.
        inqs.filter(pk=69)                                    ## Should work.

    It seems to me that there should be a way to enforce the security of these objects by allowing only certain users to access them. I am not concerned with how this might impact the admin interface.

    Read the article

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables, however these are not enforced with foreign key constraints. Consider the following simplified example of the typical types of tables I am dealing with. There are clear relations between the User and the City and Province tables. However, the key issues are the inconsistent data types in the tables and the naming conventions:

        User:
            UserRowId [int] PK
            Name [varchar(50)]
            CityId [smallint]
            ProvinceRowId [bigint]

        City:
            CityRowId [bigint] PK
            CityDescription [varchar(100)]

        Province:
            ProvinceId [int] PK
            ProvinceDesc [varchar(50)]

    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source as-is, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase, and this is one of the stumbling blocks I have come across.

    What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials work with relatively cleanly designed existing databases, or newly created ones, compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.NET) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO which indicates to me that an ORM can be used - however I get the impression that this is a question of mapping?

    Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL, or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience with ORMs and MVC - so this is an awesome learning curve I am on.
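
    As a concrete, hedged illustration of the stumbling block (using the sample tables above; a real schema would first need orphan rows cleaned up before constraints could be added): enforcing the User-City relationship means aligning the mismatched types before a genuine constraint is possible.

        -- CityId is SMALLINT but CityRowId is BIGINT, so align the types first
        ALTER TABLE [User] ALTER COLUMN CityId BIGINT

        -- ...then the relationship can actually be enforced
        ALTER TABLE [User] ADD CONSTRAINT FK_User_City
            FOREIGN KEY (CityId) REFERENCES City (CityRowId)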

    Read the article

  • Getting the ranking of a photo in SQL

    - by Jake Petroules
    I have the following tables:

        Photos [ PhotoID, CategoryID, ... ]    PK [ PhotoID ]
        Categories [ CategoryID, ... ]         PK [ CategoryID ]
        Votes [ PhotoID, UserID, ... ]         PK [ PhotoID, UserID ]

    A photo belongs to one category. A category may contain many photos. A user may vote once on any photo. A photo can be voted for by many users. I want to select the ranks of a photo (by counting how many votes it has) both overall and within the scope of the category that photo belongs to. The count of SELECT * FROM Votes WHERE PhotoID = @PhotoID being the number of votes a photo has. I want the resulting table to have generated columns for overall rank, and rank within category, so that I may order the results by either. So for example, the resulting table from the query should look like:

        PhotoID  VoteCount  RankOverall  RankInCategory
        1        48         1            7
        3        45         2            5
        19       33         3            1
        2        17         4            3
        7        9          5            5

    ...you get the idea. How can I achieve this? So far I've got the following query to retrieve the vote counts, but I need to generate the ranks as well:

        SELECT PhotoID, UserID, CategoryID, DateUploaded,
            (SELECT COUNT(CommentID) AS Expr1
             FROM dbo.Comments
             WHERE (PhotoID = dbo.Photos.PhotoID)) AS CommentCount,
            (SELECT COUNT(PhotoID) AS Expr1
             FROM dbo.PhotoVotes
             WHERE (PhotoID = dbo.Photos.PhotoID)) AS VoteCount,
            Comments
        FROM dbo.Photos
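
    One hedged sketch of how the ranks could be generated, assuming SQL Server 2005 or later (the version isn't stated) so that RANK() window functions are available:

        -- VoteCount is aggregated per photo; the window functions then rank
        -- those aggregates overall and per category
        SELECT p.PhotoID,
               COUNT(v.UserID) AS VoteCount,
               RANK() OVER (ORDER BY COUNT(v.UserID) DESC) AS RankOverall,
               RANK() OVER (PARTITION BY p.CategoryID
                            ORDER BY COUNT(v.UserID) DESC) AS RankInCategory
        FROM dbo.Photos p
        LEFT JOIN dbo.Votes v ON v.PhotoID = p.PhotoID
        GROUP BY p.PhotoID, p.CategoryID
        ORDER BY RankOverall

    Ordering by RankInCategory instead gives the per-category view.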

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer.

    2) As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
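
    If the GUID route is taken, one hedged sketch (hypothetical table and column names; MySQL's UUID() format is shown inline for illustration, though the application would normally generate each value itself so it already knows the IDs before the batch is sent):

        CREATE TABLE my_table (
            id CHAR(36) NOT NULL PRIMARY KEY,  -- client-supplied GUID
            a  INT NOT NULL,
            b  INT NOT NULL
        );

        -- One batched statement, with every ID already known to the program
        INSERT INTO my_table (id, a, b) VALUES
            ('6ccd780c-baba-1026-9564-5b8c656024db', 1, 2),
            ('6ccd780c-baba-1026-9564-5b8c656024dc', 3, 4);

    The caveat raised above still applies: random key values scatter inserts across the primary-key index (notably on InnoDB, which clusters rows by PK), which is exactly the update cost being asked about.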

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure (in pseudo SQL), I hope to explain it a bit better:

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about ordering of children for this problem). StructureName is a simplification of a translation system. Finally, 'Something' is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but this one serves as a good example for most cases.

    There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities for tackling this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the 'Something' record, given that the root structure will be a new record and have a new ID.

    Other avenues of possible solutions are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Java/Hibernate: How can I translate these tables to Hibernate annotations?

    - by penas
    I need to create a simple application using these tables: http://stackoverflow.com/questions/2612848/are-these-tables-respect-the-3nf-database-normalization

    I have created the application using plain old JDBC, but I would like to see what the application would look like using Hibernate. I don't know how to put the SQL code in Java, though. I have found LOTS of examples, but I'm pretty confused about using Hibernate and I don't know if I did a good job. For example, for the first three tables:

        AUTHOR table
            * Author_ID, PK
            * First_Name
            * Last_Name

        TITLES table
            * TITLE_ID, PK
            * NAME
            * Author_ID, FK

        DOMAIN table
            * DOMAIN_ID, PK
            * NAME
            * TITLE_ID, FK

    The code in Java:

    Table 1:

        @Entity
        @Table(name = "AUTHORS", schema = "LIBRARY")
        public class Author {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Author_ID")
            private int authorId;

            @Column(name = "First_Name", nullable = false, length = 50)
            private String firstName;

            @Column(name = "Last_Name", nullable = false, length = 40)
            private String lastName;

            @OneToMany
            @JoinColumn(name = "Title_ID")
            private List<Title> titles;

    Table 2:

        @Entity
        @Table(name = "TITLES")
        public class Title {

            @Id
            @Column(name = "Title_ID")
            private int titleID;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            @ManyToOne
            @JoinColumn(name = "Domain_ID")
            private Domain domains;

    Table 3:

        @Entity
        @Table(name = "DOMAINS")
        public class Domain {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Domain_ID")
            private int Domain_ID;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            @OneToOne(mappedBy = "domains")
            private Title title;
        }

    Any good? :)

    Read the article

  • Why doesn't Spring @Autowired work with Java generics?

    - by testing123
    Inspired by Spring Data awesomeness, I wanted to create an abstract RESTController that I could extend for a lot of my controllers. I created the following class:

        @Controller
        public abstract class RESTController<E, PK extends Serializable,
                R extends PagingAndSortingRepository<E, PK>> {

            @Autowired
            private R repository;

            @RequestMapping(method=RequestMethod.GET, params={"id"})
            @ResponseBody
            public E getEntity(@RequestParam PK id) {
                return repository.findOne(id);
            }

            ...
        }

    I was hoping that the generics would allow me to @Autowire in the repository, but I get the following error:

        SEVERE: Allocate exception for servlet appServlet
        org.springframework.beans.factory.NoSuchBeanDefinitionException: No unique bean of type
        [org.springframework.data.repository.PagingAndSortingRepository] is defined: expected single
        matching bean but found 3: [groupRepository, externalCourseRepository, managedCourseRepository]
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:800)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:707)
            at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:478)
            at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
            at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:284)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1106)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)

    I understand what the error is telling me: there is more than one match for the @Autowire. I am confused because I thought that by creating the following controller it would work:

        @Controller
        @RequestMapping(value="/managedCourse")
        public class ManagedCourseController extends
                RESTController<ManagedCourse, Long, ManagedCourseRepository> {
            ...
        }

    This is easy enough to work around by having a method like this in the RESTController:

        protected abstract R getRepository();

    and then doing this in your implementing class:

        @Autowired
        private ManagedCourseRepository repository;

        @Override
        protected ManagedCourseRepository getRepository() {
            return repository;
        }

    I was just wondering if someone had any thoughts on how I could get this to work.

    Read the article

  • WPF & Linq To SQL binding ComboBox to foreign key

    - by ZeroDelta
    I'm having trouble binding a ComboBox to a foreign key in WPF using LINQ to SQL. It works fine when displaying records, but if I change the selection on the ComboBox, that change does not seem to affect the property to which it is bound.

    My SQL Server Compact file has three tables: Players (PK is PlayerID), Events (PK is EventID), and Matches (PK is MatchID). Matches has FKs for the other two, so that a match is associated with a player and an event. My window for editing a match uses a ComboBox to select the Event, and the ItemsSource is set to the result of a LINQ query to pull all of the Events. And of course the user should be able to select the Event based on EventName, not EventID. Here's the XAML:

        <ComboBox x:Name="cboEvent"
                  DisplayMemberPath="EventName"
                  SelectedValuePath="EventID"
                  SelectedValue="{Binding Path=EventID, UpdateSourceTrigger=PropertyChanged}" />

    And some code-behind from the Loaded event handler:

        var evt = from ev in db.Events
                  orderby ev.EventName
                  select ev;
        cboEvent.ItemsSource = evt.ToList();

        var mtch = from m in db.Matches
                   where m.PlayerID == ((Player)playerView.CurrentItem).PlayerID
                   select m;
        matchView = (CollectionView)CollectionViewSource.GetDefaultView(mtch);
        this.DataContext = matchView;

    When displaying matches, this works fine - I can navigate from one match to the next and the EventName is shown correctly. However, if I select a new Event via this ComboBox, the CurrentItem of the CollectionView doesn't seem to change. I feel like I'm missing something stupid!

    Note: the Player is selected via a ListBox, and that selection filters the matches displayed - this seems to be working fine, so I didn't include that code. That is the reason for the "PlayerID" reference in the LINQ query.

    Read the article

  • JPA + EJB + JSF: how can I design a complicated query?

    - by Harry Pham
    I am using NetBeans 6.8, by the way. Let's say that I have 4 different tables: Company, Facility, Project, and Document. The relationship is this: a company can have multiple facilities, a facility can have multiple projects, and a project can have multiple documents.

        Company:
            +companyNum: PK
            +facilityNum: FK

        Facility:
            +facilityNum: PK
            +projectNum: FK

        Project:
            +projectNum: PK
            +drawingNum: FK

    So when I create Entity Classes From Database in NetBeans 6.8, I have 4 entity classes named after the above 4 tables. If I want to see all the Documents in the database, then it is easy. In my session bean, I would do this:

        @PersistenceContext
        private EntityManager em;

        List<Document> documents = em.createNamedQuery("Document.findAll").getResultList();

    However, that is not all that I need. Let's say that I want to know all the Documents from a particular Company, or all the Documents from a particular Project from a particular Facility from a particular Company. I am very new to JPA + EJB + JSF as a whole. Please help me out.
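
    For what it's worth, a hedged sketch of the join the second query needs, written as plain SQL. It assumes the foreign keys actually run child-to-parent (Document.projectNum -> Project, Project.facilityNum -> Facility, Facility.companyNum -> Company) - the conventional direction for one-to-many, and the reverse of the listing above - and the column names are assumptions:

        SELECT d.*
        FROM Document d
        JOIN Project  p ON p.projectNum  = d.projectNum
        JOIN Facility f ON f.facilityNum = p.facilityNum
        WHERE f.companyNum = ?   -- the particular company

    The same shape could be expressed as a JPQL named query once the entity relationships are mapped.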

    Read the article

  • How to delete a row from a typed dataset and save it to a database?

    - by Zingam
    I am doing this as a small project to learn about disconnected and connected models in .NET 4.0 and SQL Server 2008 R2. I have three tables:

        Companies (PK CompanyID)
        Addresses (PK AddressID, FK CompanyID)
        ContactPersons (PK ContactPersonID, FK CompanyID)

    CompanyID is assigned manually by the users; the other IDs are auto-generated. Companies has a one-to-many relationship with ContactPersons. I have set any changes to cascade. I display all records in Companies in a DataGridView and, when a row is clicked, the corresponding records in ContactPersons are displayed in a second DataGridView. I have successfully implemented updating and inserting new records, but I completely fail in my attempts to delete rows and save the changes to the database. I use a typed dataset.

    If I use this:

        DataRow[] contactPersonRows = m_SoldaCompaniesFileDataSet.ContactPersons.Select(
            "ContactPersonID = " + this.m_CurrentContactPerson.ContactPersonID);
        m_SoldaCompaniesFileDataSet.ContactPersons.Rows.Remove(contactPersonRows[0]);

    the records are displayed properly in the DataGridView but are not saved in the database later.

    If I use this:

        DataRow row = m_SoldaCompaniesFileDataSet.ContactPersons.Rows.Find(
            this.m_CurrentContactPerson.ContactPersonID);
        row.Delete();

    the row is deleted, but I get a DeletedRowInaccessibleException when I try to refresh the DataGridView. The exception pops up in the auto-generated dataset designer file.

    I have been pretty much stuck at this point since yesterday. I cannot find anything anywhere that remotely resembles my problem, and I cannot understand what is actually going on.

    Read the article

  • Processing a resultset to look up foreign keys (and populate a new table!)

    - by Gilly
    Hi, I've been handed a dataset that has some fairly basic table structures with no keys at all, e.g.:

        {myRubishTable} - Area (varchar), AuthorityName (varchar), StartYear (varchar),
                          StartMonth (varchar), EndYear (varchar), EndMonth (varchar), Amount (Money)

    There are other tables that use the Area and AuthorityName columns, as well as a general use of Months and Years, so I figured a good first step was to pull Area and AuthorityName into their own tables. I now want to process the data in the original table and look up the key values to put into my new table with foreign keys, which looks like this:

        (Lookup tables)
        {Area} - id (int, PK), name (varchar(50))
        {AuthorityName} - id (int, PK), name (varchar(50))

        (Target table)
        {myBetterTable} - id (int, PK), area_id (int, FK to Area),
                          authority_name_id (int, FK to AuthorityName), StartYear (varchar),
                          StartMonth (varchar), EndYear (varchar), EndMonth (varchar), Amount (money)

    So row one in the old table read MYAREA, MYAUTHORITY, 2009, Jan, 2010, Feb, 10000 and I want to populate the new table with 1, 1, 1, 2009, Jan, 2010, Feb, 10000 - where the first '1' is the primary key and the second two '1's are the ids in the lookup tables. Can anyone point me to the most efficient way of achieving this using just SQL? Thanks in advance.

    Footnote: I've achieved what I needed with some pretty simple WHERE clauses (I had left a rogue table name in the FROM which was throwing me :o( ) but would be interested to know if this is the most efficient:

        SELECT [area].[area_id], [authority].[authority_name_id],
               [myRubishTable].[StartYear], [myRubishTable].[StartMonth],
               [myRubishTable].[EndYear], [myRubishTable].[EndMonth],
               [myRubishTable].[Amount]
        FROM [myRubishTable], [Area], [AuthorityName]
        WHERE [myRubishTable].[Area] = [Area].[name]
          AND [myRubishTable].[Authority Name] = [dim_AuthorityName].[name]

    TIA
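
    For the record, the same lookup can feed the new table in a single set-based statement using explicit joins (a sketch following the schema listing above; it assumes myBetterTable.id is an identity column, so it is omitted from the insert list):

        INSERT INTO myBetterTable
            (area_id, authority_name_id, StartYear, StartMonth, EndYear, EndMonth, Amount)
        SELECT a.id, an.id,
               r.StartYear, r.StartMonth, r.EndYear, r.EndMonth, r.Amount
        FROM myRubishTable r
        JOIN Area a           ON a.name  = r.Area
        JOIN AuthorityName an ON an.name = r.AuthorityName

    The explicit JOIN form is equivalent to the comma-list-plus-WHERE form in the footnote, but it keeps each join condition next to the table it relates.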

    Read the article

  • recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, css, js) pose a problem, as I seem unable to specify that I would like wget to grab page requisites that are on other hosts, without having the crawl also follow hyperlinks to other hosts.

    For example, let's look at this page: https://dl.dropbox.com/u/11471672/wget-all-the-things.html

    Let's pretend that this is a large site that I would like to completely mirror, including all page requisites – including those that are hotlinked.

        wget -e robots=off -r -l inf -pk

    ^^ gets everything but the hotlinked image

        wget -e robots=off -r -l inf -pk -H

    ^^ gets everything, including the hotlinked image, but goes wildly out of control, proceeding to download the entire web

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a

    ^^ gets the first page, including both hotlinked and local images; does not follow the hyperlink to the site outside of scope, but obviously also does not follow the hyperlink to the next page of the site

    I know that there are various other tools and methods of accomplishing this (HTTrack and Heritrix allow the user to make a distinction between hotlinked content on other hosts and hyperlinks to other hosts), but I'd like to see if this is possible with wget. Ideally this would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.

    Read the article

  • Driver for Ensoniq AudioPCI ES1370 on Windows 7

    - by Matthew Steeples
    KVM is a virtualisation package for running operating systems such as Windows on Linux. Windows XP works fine in this, but Windows 7 fails to recognise the sound card. According to the documentation, the sound card is an Ensoniq AudioPCI ES1370. Does anyone know where I can find a driver for this, or a compatible driver that will run under Windows 7? I've not tried this in Vista as yet.

    Read the article

  • Failed MDADM Array With Ext4 Partition - "e2fsck: unable to set superblock flags on /dev/md0"

    - by Matthew Hodgkins
    Had a power failure and now my mdadm array is having problems.

    sudo mdadm -D /dev/md0:

        [hodge@hodge-fs ~]$ sudo mdadm -D /dev/md0
        /dev/md0:
                Version : 0.90
          Creation Time : Sun Apr 25 01:39:25 2010
             Raid Level : raid5
             Array Size : 8790815232 (8383.57 GiB 9001.79 GB)
          Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB)
           Raid Devices : 7
          Total Devices : 7
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Sat Aug 7 19:10:28 2010
                  State : clean, degraded, recovering
         Active Devices : 6
        Working Devices : 7
         Failed Devices : 0
          Spare Devices : 1

                 Layout : left-symmetric
             Chunk Size : 128K

         Rebuild Status : 10% complete

                   UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs)
                 Events : 0.1307608

            Number   Major   Minor   RaidDevice State
               0       8       81        0      active sync   /dev/sdf1
               1       8       97        1      active sync   /dev/sdg1
               2       8      113        2      active sync   /dev/sdh1
               3       8       65        3      active sync   /dev/sde1
               4       8       49        4      active sync   /dev/sdd1
               7       8       33        5      spare rebuilding   /dev/sdc1
               6       8       16        6      active sync   /dev/sdb

    sudo mount -a:

        [hodge@hodge-fs ~]$ sudo mount -a
        mount: wrong fs type, bad option, bad superblock on /dev/md0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    sudo fsck.ext4 /dev/md0:

        [hodge@hodge-fs ~]$ sudo fsck.ext4 /dev/md0
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext4: Group descriptors look bad... trying backup blocks...
        /dev/md0: recovering journal
        fsck.ext4: unable to set superblock flags on /dev/md0

    sudo dumpe2fs /dev/md0 | grep -i superblock:

        [hodge@hodge-fs ~]$ sudo dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.41.12 (17-May-2010)
        Primary superblock at 0, Group descriptors at 1-524
        Backup superblock at 32768, Group descriptors at 32769-33292
        Backup superblock at 98304, Group descriptors at 98305-98828
        Backup superblock at 163840, Group descriptors at 163841-164364
        Backup superblock at 229376, Group descriptors at 229377-229900
        Backup superblock at 294912, Group descriptors at 294913-295436
        Backup superblock at 819200, Group descriptors at 819201-819724
        Backup superblock at 884736, Group descriptors at 884737-885260
        Backup superblock at 1605632, Group descriptors at 1605633-1606156
        Backup superblock at 2654208, Group descriptors at 2654209-2654732
        Backup superblock at 4096000, Group descriptors at 4096001-4096524
        Backup superblock at 7962624, Group descriptors at 7962625-7963148
        Backup superblock at 11239424, Group descriptors at 11239425-11239948
        Backup superblock at 20480000, Group descriptors at 20480001-20480524
        Backup superblock at 23887872, Group descriptors at 23887873-23888396
        Backup superblock at 71663616, Group descriptors at 71663617-71664140
        Backup superblock at 78675968, Group descriptors at 78675969-78676492
        Backup superblock at 102400000, Group descriptors at 102400001-102400524
        Backup superblock at 214990848, Group descriptors at 214990849-214991372
        Backup superblock at 512000000, Group descriptors at 512000001-512000524
        Backup superblock at 550731776, Group descriptors at 550731777-550732300
        Backup superblock at 644972544, Group descriptors at 644972545-644973068
        Backup superblock at 1934917632, Group descriptors at 1934917633-1934918156

    sudo e2fsck -b 32768 /dev/md0:

        [hodge@hodge-fs ~]$ sudo e2fsck -b 32768 /dev/md0
        e2fsck 1.41.12 (17-May-2010)
        /dev/md0: recovering journal
        e2fsck: unable to set superblock flags on /dev/md0

    sudo dmesg | tail:

        [hodge@hodge-fs ~]$ sudo dmesg | tail
        EXT4-fs (md0): ext4_check_descriptors: Checksum for group 0 failed (59837!=29115)
        EXT4-fs (md0): group descriptors corrupted!

    Please help!!!

    Read the article

  • Setup MSSQL replication with peer-to-peer topology: problem setting up conflict detection

    - by Roel
    Hi, I'm setting up a SQL replication strategy, using MSSQL 2008 with peer-to-peer publications (2 servers, each one subscribed to the other). I followed this HOWTO from MSDN, and the setup seems to be working fine: add a record to one table on server A, and a query on server B shows the new record. So far, so good.

    So far I only have one table, 'Templates':

        Id      PK (calculated field)
        NodeId  int, default 1/2 (Server A = 1, Server B = 2)
        LocalId int, autoid
        Name    nvarchar(100)

    Now I would like to enable 'conflict detection', which should be enabled by default. But every time I try to save the 'Conflict Detection' feature in the Publication Properties, I get the following error:

        Cannot save Peer conflict detection properties.
        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.ConnectionInfo)

        Program Location:
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand)
           at Microsoft.SqlServer.Replication.ReplicationObject.ExecCommand(String commandIn)
           at Microsoft.SqlServer.Replication.TransPublication.SetPeerConflictDetection(Boolean enablePeerConflictDetection, Int32 peerOriginatorID)
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveP2PConflictDetection()
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveProperties(ExecutionMode& executionResult)

        Column name 'Id' does not exist in the target table or view.
        Changed database context to 'TestDB'. (.Net SqlClient Data Provider)

        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=1911&LinkId=20476

        Server Name: SERVER_A
        Error Number: 1911
        Severity: 16
        State: 1
        Line Number: 2

        Program Location:
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
           at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
           at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)

    Now, I googled the hell out of this error, and nothing shows up. I also can't seem to find out which table the error "Column name 'Id' does not exist..." is actually aimed at. Has anyone ever done this successfully? Am I missing something? Having this setup without conflict detection feels pretty useless...

    EDIT: OK, so after some more research and setting up with different databases etc., I found out that the calculated 'Id' column of the Templates table is the culprit. I don't know why, but the replication doesn't seem to allow calculated columns (which are also the primary key). It works now, without the 'Id' column, using NodeId and LocalId as a combined PK. So now the question is: why isn't a calculated column allowed as the PK for replication with conflict detection?
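
    For anyone hitting the same wall, a rough sketch of the schema change described in the edit (the constraint names are hypothetical, and the table would need to be removed from the publication while the key is rebuilt):

        -- Drop the computed-column primary key...
        ALTER TABLE dbo.Templates DROP CONSTRAINT PK_Templates;
        ALTER TABLE dbo.Templates DROP COLUMN Id;

        -- ...and key the table on the (NodeId, LocalId) pair instead
        ALTER TABLE dbo.Templates ADD CONSTRAINT PK_Templates
            PRIMARY KEY CLUSTERED (NodeId, LocalId);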

    Read the article

  • Professional Custom Logo Design vs. Mr. Right

    John is an ex-marine and ex-employee of General Motors. He recently lost his job working as a welder on the assembly lines of one of GM's manufacturing plants. John has traveled a lot and knows a lot a... [Author: Emily Matthew - Web Design and Development - March 31, 2010]

    Read the article

  • Get IIS 7.5 to listen on IPv6

    - by Matthew Steeples
    I'm using IIS on Windows 7, and I can't get it to bind to the IPv6 equivalents of 0.0.0.0 and 127.0.0.1 ([::] and [::1]).

    The first one gives me an error when trying to start the service:

        The World Wide Web Publishing Service (WWW Service) did not register the URL prefix
        http://[::]:80/ for site 1. The site has been disabled. The data field contains the
        error number.

    The second one doesn't give any errors but doesn't listen on anything apart from 0.0.0.0. The bindings dropdown only has my Teredo address (2001::) listed, and not my link-local (fe80::) one.

    Read the article

  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. The servers are directly connected with a 1000Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start up heartbeat and provide access to the DRBD resource from a cold boot. I have to manually restart heartbeat with sudo service heartbeat restart to get everything up and running. How can I get it to start cleanly from a cold boot, when only one server is present?

    Here is the ha.cf:

        debug /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility none
        keepalive 2
        deadtime 10
        warntime 7
        initdead 60
        ucast eth1 192.168.2.2
        ucast eth0 10.1.10.201
        node EMserver1
        node EMserver2
        respawn hacluster /usr/lib/heartbeat/ipfail
        ping 10.1.10.22 10.1.10.21 10.1.10.11
        auto_failback off

    Some material from the syslog:

        harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status
        mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
        mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2.
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us))
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete.
        IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped
        Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed.
        harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
        ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255
        IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start
        Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start
        Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr
        Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none)
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed.
        Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586)
        Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources.
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop
        Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr
        Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr
        Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully
        Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success
        ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop
        ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop
        IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down
        IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success
        Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished.
        Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete.

    Here is some more from the log:

        ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK
        Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources.
        Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead.
        Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive!
        Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead
        Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count.
        Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes.
        Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695)
        Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources.
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop
        Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr
        Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr
        Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully
        Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success
        ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop
        ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop
        IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down
        IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success
        Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished.
        Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 10 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete.

    If I restarted heartbeat at this point, the resources heartbeat controls would start up fine. Please help!

    Read the article

  • IPv6: Can't ping anything - "Operation not permitted"

    - by Matthew Iselin
    I've been working on getting IPv6 support into my network, and had everything working properly for a short while. The server is running Ubuntu Server 8.10. Now, however, whenever I attempt to do anything related to IPv6 on the server, I get "Operation not permitted". This is coming from things like wide-dhcpv6-client (when trying to get an IPv6 address from the ISP) and radvd - both log errors of this type into syslog. Even pinging the loopback interface fails:

        xxx@gordon:~$ ping6 ::1
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

        xxx@gordon:~$ sudo ping6 ::1
        sudo: unable to resolve host gordon
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

    As you can see, I have attempted pinging as root, as most of the material I've found on the internet points to a permission problem. However, that has not helped. Any hints to getting unstuck would be appreciated.

    Read the article

  • [Ubuntu 10.04 Desktop] Unable to get Webmin or Vboxdrv to auto start on boot

    - by Matthew Hodgkins
    Ever since installing Ubuntu 10.04 I've had issues getting things to auto-start. I have installed Webmin and VirtualBox, but every time I reboot I have to manually run:

        sudo /etc/init.d/webmin start
        sudo /etc/init.d/vboxdrv start

    I have run:

        sudo update-rc.d -f webmin remove

    and then:

        hodge@hodge-fs:~$ sudo update-rc.d webmin defaults
        update-rc.d: warning: webmin start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (2 3 5)
        Adding system startup for /etc/init.d/webmin ...
           /etc/rc0.d/K20webmin -> ../init.d/webmin
           /etc/rc1.d/K20webmin -> ../init.d/webmin
           /etc/rc6.d/K20webmin -> ../init.d/webmin
           /etc/rc2.d/S20webmin -> ../init.d/webmin
           /etc/rc3.d/S20webmin -> ../init.d/webmin
           /etc/rc4.d/S20webmin -> ../init.d/webmin
           /etc/rc5.d/S20webmin -> ../init.d/webmin

    But they still refuse to start on boot. Any ideas?

    Read the article

  • 2.0 speeds on USB hub?

    - by Matthew Robertson
    How capable are USB hubs? I have an AirPort Extreme router with a printer attached (it's not powered by USB). I want to extend this and add two hard drives (one for Time Machine and the other for EyeTV recordings). Can a 4-port USB hub (I'm considering this one) achieve USB 2.0 speeds and power the hard drives? What difference would a bus-powered vs. an externally-powered (self-powered) hub make?

    Read the article

  • Running SSL locally on a hosts redirected domain name with Ubuntu and Apache

    - by Matthew Brown
    I recently made some changes to my Ubuntu computer so that a domain name resolves to my local copy of Apache. I edited /etc/hosts and added:

        127.0.0.1    thisbit.example.com

    Then I set up a VirtualHost for the responses I wished to create. That all works fine, and my testing is now shooting ahead without harm or risk to the production server.

    Now for my next trick: I need to test the authentication, and so need to do this with HTTPS. Basically, https://auth.example.com needs to work on my PC without the SSL causing an issue, which I imagine it would, as I am clearly not the true https://auth.example.com - but for the basis of this exercise I need to pretend that I am.

    Now it might be that the apps I'm testing don't worry about checking the certificate. (Many are in Java, which I'm no expert with.) What gotchas am I likely to encounter, and what is the best way of not letting my own hacks spoil my testing? I'm guessing the place to start is to enable SSL with Apache... I've never done that before, as it has never come up before.

    Read the article

  • Are there netcat-like tools for Windows which are not quarantined as malware?

    - by Matthew Murdoch
    I used to use netcat for Windows to help track down network connectivity issues. However these days my anti-virus software (Symantec - but I understand others display similar behaviour) quarantines netcat.exe as malware. Are there any alternative applications which provide at least the following functionality?

        - can connect to an open TCP socket and send data to it which is typed on the console
        - can open and listen on a TCP socket and print received data to the console

    I don't need the 'advanced' features (which are possibly the reason for the quarantining) such as port scanning or remote execution.

    Read the article
