Search Results

Search found 12058 results on 483 pages for 'abstract syntax tree'.

Page 370/483 | < Previous Page | 366 367 368 369 370 371 372 373 374 375 376 377  | Next Page >

  • What does path finding in internet routing do and how is it different from A*?

    - by alan2here
    Note: If you don't understand this question then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User. In many respects, path finding algorithms like A* are very similar to internet routing. For example: A node in an A* path finding system can search for a path through edges between other nodes. A router that's part of the internet can search for a route through cables between other routers. In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, and each node can also temporarily store a state involving several numbers. Routers on the internet seem to have remarkable properties, as I understand it: They are very performant. New nodes can be added at any time that use a free address from a finite (not tree like) address space. It's real routing; like A*, there's never any doubling back, for example. Similar IP addresses don't have to be geographically nearby. The network reacts quickly to changes to the network's shape, for example if a line is down. Routers share information, and it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses each of its directions leads most directly to. I'm looking for a basic, general, high level description of the algorithm's workings from the point of view of an individual router. Does anyone have one? I presume public internet routers don't use A*, as the overheads would be too large and it would scale too poorly. I also presume there is a single method worldwide, because it seems as if it must involve a lot of transferring data to update and communicate a reasonable amount of state between neighboring routers. For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide, the detail and reliability of the routing is reduced over increasing distances, there is increasing backtracking involved in parts of the network that are less geographically uniform, or maybe each router really does perform an A* style search, temporarily maintaining open and closed lists when a packet arrives.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explain the tools you can use to design databases. I also explain that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is still not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends it as one - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one) and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • How do you make a bullet ricochet off a vertical wall?

    - by Bagofsheep
    First things first. I am using C# with XNA. My game is top-down and the player can shoot bullets. I've managed to get the bullets to ricochet correctly off horizontal walls. Yet, despite using similar methods (e.g. http://stackoverflow.com/questions/3203952/mirroring-an-angle) and reading other answered questions about this subject, I have not been able to get the bullets to ricochet off a vertical wall correctly. Any method I've tried has failed and sometimes made ricocheting off a horizontal wall buggy. Here is the collision code that calls the ricochet method: //Loop through returned tile rectangles from quad tree to test for wall collision. If a collision occurs perform collision logic. for (int r = 0; r < returnObjects.Count; r++) if (Bullets[i].BoundingRectangle.Intersects(returnObjects[r])) Bullets[i].doCollision(returnObjects[r]); Now here is the code for the doCollision method. public void doCollision(Rectangle surface) { if (Ricochet) doRicochet(surface); else Trash = true; } Finally, here is the code for the doRicochet method. public void doRicochet(Rectangle surface) { if (Position.X > surface.Left && Position.X < surface.Right) { //Mirror the bullet's angle. Rotation = -1 * Rotation; //Moves the bullet in the direction of its rotation by given amount. moveFaceDirection(Sprite.Width * BulletScale.X); } else if (Position.Y > surface.Top && Position.Y < surface.Bottom) { } } Since I am only dealing with vertical and horizontal walls at the moment, the if statements simply determine if the object is colliding from the right or left, or from the top or bottom. If the object's X position is within the tile's X boundaries (left and right sides), it must be colliding from the top or bottom, and vice versa. As you can see, the else if statement is empty and is where the correct code needs to go.
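    A speculative sketch of the missing reflection math (this is not the asker's code): for a top-down XNA game, hitting the top or bottom of a tile mirrors the angle about the X axis (negate it, as the code above already does), while hitting the left or right side mirrors it about the Y axis (reflect it across Pi). Helper methods using XNA's MathHelper might look like this:

      using Microsoft.Xna.Framework;

      static class RicochetMath
      {
          // Bullet hits the top or bottom of a tile: mirror the angle about the X axis.
          public static float ReflectOffHorizontalWall(float rotation)
          {
              return MathHelper.WrapAngle(-rotation);
          }

          // Bullet hits the left or right side of a tile: mirror the angle about the Y axis.
          public static float ReflectOffVerticalWall(float rotation)
          {
              return MathHelper.WrapAngle(MathHelper.Pi - rotation);
          }
      }

    The empty else-if branch could then set Rotation = RicochetMath.ReflectOffVerticalWall(Rotation) and call moveFaceDirection the same way the horizontal case does.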

    Read the article

  • RadioButtons and Lambda Expressions

    - by MightyZot
    Radio buttons operate in groups. They are used to present mutually exclusive lists of options. Since I started programming in Windows 20 years ago, I have always been frustrated about how they are implemented. To make them operate as a group, you put your radio buttons in a group box. By contrast, to group radio buttons in HTML, you simply give them all the same name. Radio buttons with the same name in HTML operate as one mutually exclusive group of options. In C#, all your radio buttons must have unique names and you use group boxes to group them. I’m in the process of converting some old code to C# and I’m tasked with creating a user control with groups of radio buttons on it. I started out writing the traditional switch…case statements to check the appropriate radio button based upon value, loops to uncheck them all, etc. Then it occurred to me that I could stick the radio buttons in a Dictionary or List and use Lambda expressions to make my code a lot more maintainable. So, here is what I ended up with: Here is a dictionary that contains my list of radio buttons and their values. I used their values as the keys, so that I can select them by value. Now, instead of using loops and switch…case statements to control the radio buttons, I use the lambda syntax and extension methods. Selecting a Radio Button by Value This code is inside of a property accessor, so “value” represents the value passed into the property accessor. The “First” extension method uses the delegate represented by the lambda expression to select the radio button (actually KeyValuePair) that represents the passed in value. Finally, the resulting radio button is checked. Since the radio buttons are in the same group, they function as a group: the appropriate radio button is selected while the others are unselected. Reading the Value This is the get accessor for the property that returns the value of the checked radio button. Now, if you’re using binding, this code is likely not necessary; however, I didn’t want to use binding in this case, so I think this is a good alternative to the traditional loops and switch…case statements.
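    The code the post refers to is not reproduced in this excerpt (it appears as screenshots in the original article), so the following is only a speculative reconstruction of the described approach; the control, option names, and values are illustrative, not the author's:

      using System.Collections.Generic;
      using System.Linq;
      using System.Windows.Forms;

      public class PaymentOptionsControl : UserControl
      {
          // Dictionary of radio buttons keyed by their values, as described in the post.
          private readonly Dictionary<string, RadioButton> _options;

          public PaymentOptionsControl()
          {
              _options = new Dictionary<string, RadioButton>
              {
                  { "CASH",   new RadioButton { Text = "Cash",   Top = 0  } },
                  { "CHECK",  new RadioButton { Text = "Check",  Top = 25 } },
                  { "CHARGE", new RadioButton { Text = "Charge", Top = 50 } },
              };
              foreach (var rb in _options.Values)
                  Controls.Add(rb);   // same parent, so they act as one mutually exclusive group
          }

          public string PaymentType
          {
              // Reading the value: return the key of whichever radio button is checked.
              get { return _options.First(kvp => kvp.Value.Checked).Key; }

              // Selecting a radio button by value: check the matching button;
              // the group unchecks the others automatically.
              set { _options.First(kvp => kvp.Key == value).Value.Checked = true; }
          }
      }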

    Read the article

  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and want to search for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found (better means closer to the root of c:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report). Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | not started | complete), and might base it on System.Collections.Concurrent. Another idea that I'm considering is the consumer/producer pattern (which the concurrent collections can handle); however, I'm not sure what the objects should "look" like. Optional Logical Constraint: The API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for.  If someone were to literally do a DIR /S as described above then they would need to account for more than one matching file per subdirectory. More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found) but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want to get a report of ALL the nodes that contain this RowKey.
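    One way to shape this kind of API - purely an illustration against the DIR /S form of the problem, not the Azure Tables one - is the producer/consumer pattern the question mentions: the traversal publishes matches (tagged with their depth, so the caller can prefer results closer to the root) into a BlockingCollection, and the caller consumes them as they arrive:

      using System.Collections.Concurrent;
      using System.IO;
      using System.Threading.Tasks;

      public class Match
      {
          public string Path;   // where the word was found
          public int Depth;     // 0 = directly under the root; lower is "better"
      }

      public static class HierarchySearch
      {
          // Returns immediately; the caller consumes matches as they are produced:
          //   foreach (var m in HierarchySearch.Find(@"C:\data", "needle").GetConsumingEnumerable()) { ... }
          public static BlockingCollection<Match> Find(string root, string word)
          {
              var results = new BlockingCollection<Match>();
              Task.Run(() =>
              {
                  try { Walk(root, 0, word, results); }
                  finally { results.CompleteAdding(); }   // signals "traversal complete" to the consumer
              });
              return results;
          }

          private static void Walk(string dir, int depth, string word, BlockingCollection<Match> results)
          {
              Parallel.ForEach(Directory.EnumerateFiles(dir), file =>
              {
                  if (File.ReadAllText(file).Contains(word))
                      results.Add(new Match { Path = file, Depth = depth });
              });
              Parallel.ForEach(Directory.EnumerateDirectories(dir), sub =>
                  Walk(sub, depth + 1, word, results));
          }
      }

    A caller could then track the shallowest Depth seen so far to report the "best" match, or collect everything for the report-style use case.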

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2. MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and the MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as: A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7. Support for new MySQL 5.7.4 SQL syntax and configuration options. A Metadata Locks view that shows the locks connections are blocked on or waiting for. MySQL Fabric cluster connectivity - browse, view the status of, and connect to any MySQL instance in a Fabric cluster. An MS Access migration wizard - easily move Access databases to MySQL. Other significant usability improvements were made, aiming to raise productivity for advanced and new users: Direct shortcut buttons to commonly used features in the schema tree. Improved results handling: columns have better auto-sizing and their widths are saved, fonts can be customized, and results can be "pinned" to persist viewing data. A convenient Run SQL Script command to directly execute SQL scripts without loading them first. Database modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward engineering and synchronization scripts. Integrated Visual Explain within the result set panel. Visual Explain drill-down for large to very large explain plans. Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance. And much more. The list of provided binaries was updated and MySQL Workbench binaries are now available for: Windows 7 or newer, Mac OS X Lion or newer, Ubuntu 12.04 LTS and Ubuntu 14.04, Fedora 20, Oracle Linux 6.5, Oracle Linux 7, plus sources for building in other Linux distributions. For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151 Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/ On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • High-Powered Sites for Low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible to the whole world. Ok, here goes.   This first post is nothing exceptional.  It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for.  I find myself in this very situation right now.  And, I will take my own recommendation after posting this entry. Job-Seekers: To the left you will notice two links under "Recommended Learning".  I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise resharpening my skills when looking for that next job. Often you will find job postings with text like "...Looking for a candidate who can hit the ground running...", usually placed after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing.  The interesting thing about this to me is that I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST.  But what if they speed off in the wrong direction? The next time you spearhead a major task in your job, ask yourself: Am I headed in the wrong direction?  There are many ways to do this.  In fact, I've found in this new field there are more tempting ways to steer your project in the wrong direction than there are good ones.  I don't want to suggest that every one of my posts will fall into the "right direction" category, however I do think a healthy dose of introspection of the pros and cons will always be beneficial before you set off. That said, allow me to expound on the previously mentioned links. These web sites are invaluable.  They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDEs.  I've viewed many tutorials on LearnVisualStudio.NET, and only one or two so far on TrainingSpot; however, I've been delighted by their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed.  They have not (so far in my experience) demonstrated ways of using the tools that become cumbersome, impractical, or error-prone. Each website has step-by-step videos that can be paused and replayed, and most importantly, they are done in real time.  As the author is typing, the viewer gets to experience the coding experience from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncrasies, everything.  A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths.  They (as well as your own experience) are not to be feared, but enjoyed.  I highly recommend them. Good work, guys!

    Read the article

  • Is this a pattern? Should it be?

    - by Arkadiy
    The following is more of a statement than a question - it describes something that may be a pattern. The question is: is this a known pattern? Or, if it's not, should it be? I've had a situation where I had to iterate over two dissimilar multi-layer data structures and copy information from one to the other. Depending on the particular use case, I had around eight different kinds of layers, combined in about eight different combinations: A-B-C B-C A-C D-E A-D-E and so on After a few unsuccessful attempts to factor out the repetition of per-layer iteration code, I realized that the key difficulty in this refactoring was the fact that the bottom level needed access to data gathered at higher levels. To explicitly accommodate this requirement, I introduced an IterationContext class with a number of get() and set() methods for accumulating the necessary information. In the end, I had the following class structure: class Iterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const = 0; }; class RecursingIterator : public Iterator { RecursingIterator(const Iterator &below); }; class IterateOverA : public RecursingIterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // Iterate over members in dataStructure1 // locate corresponding item in dataStructure2 (passed via context) // and set it in the context // invoke the sub-iterator }; class IterateOverB : public RecursingIterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // iterate over members of dataStructure2 (from context) // set dataStructure2's item in the context // locate corresponding item in dataStructure2 (passed via context) // invoke the sub-iterator }; void main() { class FinalCopy : public Iterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // copy data from structure 1 to structure 2 in the context, // using some data from higher levels as needed } } IterationContext ctx(dataStructure2); IterateOverA(IterateOverB(FinalCopy())).iterateOver(dataStructure1, ctx); } It so happens that dataStructure1 is a uniform data structure, similar to XML DOM in that respect, while dataStructure2 is a legacy data structure made of various structs and arrays. This allows me to pass dataStructure1 outside of the context for convenience. In general, either side of the iteration or both sides may be passed via context, as convenient. The key situation points are: complicated code that needs to be invoked in "layers", with multiple combinations of layer types possible; at the bottom layer, the information from top layers needs to be visible. The key implementation points are: use of a context class to access the data from all levels of iteration; complicated iteration code encapsulated in implementations of a pure virtual function; two interfaces - one aware of the underlying iterator, one not aware of it; use of const & to simplify the usage syntax.

    Read the article

  • How to be successful at BDD Specification Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop some people were actually unhappy with it. One developer felt he wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling that we could have missed other important stuff) but, more importantly, felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself in a shorter period of time. So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/done vision. But in practice, we still think it is more cost effective to first have the BA work on her own on the examples and only then have the scenarios reviewed/reworked by the 3 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterwards to double check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff. The thing we do is review what she wrote and see if we have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she could have missed). This workshop lasted 1h30, and by the end of it, we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?

    Read the article

  • "You are missing the following 32-bit libraries, and Steam may not run: libc.so.6" The common fixes don't work,

    - by M_Steam_User
    So I know this is a problem that has been asked around a lot, but I've tried a bunch of solutions with no success. I'm running Ubuntu 12.04 (64 bit), and I just installed it yesterday. This is my first time working with Linux. The error is: You are missing the following 32-bit libraries, and Steam may not run: libc.so.6 Things I've tried: First, I had downloaded Steam from the Steam website. I uninstalled it, and tried again from the Ubuntu Software Centre. sudo apt-get update sudo apt-get install ia32-libs sudo apt-get upgrade This installed a bunch of the 32-bit libraries, but did not fix the issue. This seems like the major fix for most people. The direct approach of sudo apt-get install libc.so.6 returns this: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libc.so.6 E: Couldn't find any package by regex 'libc.so.6' I guess libc.so.6 isn't a package, just a single file or something? I also tried gksudo gedit /etc/ld.so.conf.d/steam.conf Added these two lines, though the second one was already in the file, but copied over: /usr/lib32 /usr/lib/i386-linux-gnu/mesa Then executed: sudo ldconfig But nothing seemed to happen; Steam still doesn't work. So, I feel like it is more likely that I have the library and Steam isn't looking in the right place. One thing I've seen is people usually reference /usr/local/lib/ for your library locations. However, I can't find where to cd into /usr/, it isn't in my home folder. If /usr/ is the home folder, there is only a /.local folder which only has /share, no lib anywhere. Sorry for my Linux ignorance. I appreciate any help; I honestly have no idea how to confirm I have the library and point Steam to it, or if that is even the right thing to do. Edit: Tried this, not entirely sure what it means ~$ ls -l /lib32/libc* -rwxr-xr-x 1 root root 1721832 Sep 30 11:06 /lib32/libc-2.15.so -rw-r--r-- 1 root root 185928 Sep 30 11:06 /lib32/libcidn-2.15.so lrwxrwxrwx 1 root root 15 Sep 30 11:06 /lib32/libcidn.so.1 -> libcidn-2.15.so -rw-r--r-- 1 root root 34316 Sep 30 11:06 /lib32/libcrypt-2.15.so lrwxrwxrwx 1 root root 16 Sep 30 11:06 /lib32/libcrypt.so.1 -> libcrypt-2.15.so lrwxrwxrwx 1 root root 12 Sep 30 11:06 /lib32/libc.so.6 -> libc-2.15.so

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind: Before in models.py class User(Document): domain = StringProperty() class Group(Document): domain = StringProperty() name = StringProperty() user_ids = StringListProperty() # method that returns related document set def users(self): return [User.get(id) for id in self.user_ids] # method that queries a couch view optimized for a specific lookup @classmethod def by_name(cls, domain, name): # the view method is provided by couchdbkit and handles # wrapping json CouchDB results as Python objects, and # can take various parameters modifying behavior return cls.view('groups/by_name', key=[domain, name]) # method that creates a related document def get_new_user(self): user = User(domain=self.domain) user.save() self.user_ids.append(user._id) return user in views.py: from models import User, Group # there are tons of views like this, (request, domain, ...) def create_new_user_in_group(request, domain, group_name): group = Group.by_name(domain, group_name)[0] user = User(domain=domain) user.save() group.user_ids.append(user._id) group.save() in group/by_name/map.js: function (doc) { if (doc.doc_type == "Group") { emit([doc.domain, doc.name], null); } } After models.py class DomainDocument(Document): domain = StringProperty() @classmethod def domain_view(cls, *args, **kwargs): kwargs['key'] = [cls.domain.default] + kwargs['key'] return super(DomainDocument, cls).view(*args, **kwargs) @classmethod def get(cls, *args, **kwargs, validate_domain=True): ret = super(DomainDocument, cls).get(*args, **kwargs) if validate_domain and ret.domain != cls.domain.default: raise Exception() return ret def models(self): # a mapping of all models in the application. 
accessing one returns the equivalent of class BoundUser(User): domain = StringProperty(default=self.domain) class User(DomainDocument): pass class Group(DomainDocument): name = StringProperty() user_ids = StringListProperty() def users(self): return [self.models.User.get(id) for id in self.user_ids] @classmethod def by_name(cls, name): return cls.domain_view('groups/by_name', key=[name]) def get_new_user(self): user = self.models.User() user.save() views.py @domain_view # decorator that sets request.models to the same sort of object that is returned by DomainDocument.models and removes the domain argument from the URL router def create_new_user_in_group(request, group_name): group = request.models.Group.by_name(group_name) user = request.models.User() user.save() group.user_ids.append(user._id) group.save() (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key or some other similar solution.) function (doc) { if (doc.doc_type == "Group") { emit([doc.name], null); } } Pros and Cons So what are the pros and cons of this? Pros: DRYer prevents you from creating related documents but forgetting to set the domain. prevents you from accidentally writing a django view - couch view execution path that leads to a security breach doesn't prevent you from accessing underlying self.domain and normal Document.view() method potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are. Cons: adds some complexity hides what's really happening requires no model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique. removes explicit dependency documentation (from group.models import Group)

    Read the article

  • RAID superblock missing on single partition. Recovery needed!

    - by user171639
    Ok, so I have a 2 TB RAID 1 setup that has three partitions: sdc1: linux sdc2: swap sdc3: LVM for data However, the LVM will no longer mount. So I thought that I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using: sudo su apt-get install mdadm lvm2 mdadm --assemble --scan modprobe dm-mod vgscan vgchange -ay c mount -o ro /dev/c/c /mnt Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition and the second drive is stuck in spare mode. Any ideas? Or a way to force mount the data partition just to copy the data? knoppix@Microknoppix:~$ sudo su root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2 Reading package lists... Done Building dependency tree Reading state information... Done lvm2 is already the newest version. mdadm is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded. root@Microknoppix:/home/knoppix# mdadm --assemble --scan mdadm: /dev/md/1 has been started with 1 drive (out of 2). mdadm: /dev/md/0 has been started with 1 drive (out of 2). root@Microknoppix:/home/knoppix# modprobe dm-mod root@Microknoppix:/home/knoppix# vgscan Reading all physical volumes. This may take a while... No volume groups found root@Microknoppix:/home/knoppix# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdc1[2] 4193268 blocks super 1.2 [2/1] [U_] md1 : active raid1 sdc2[2] 524276 blocks super 1.2 [2/1] [U_] unused devices: <none> root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3 mdadm: looking for devices for /dev/md2 mdadm: no recogniseable superblock on /dev/sdc3 mdadm: /dev/sdc3 has no superblock - assembly aborted root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock dumpe2fs 1.42.4 (12-Jun-2012) Primary superblock at 0, Group descriptors at 1-1 Backup superblock at 32768, Group descriptors at 32769-32769 Backup superblock at 98304, Group descriptors at 98305-98305 Backup superblock at 163840, Group descriptors at 163841-163841 Backup superblock at 229376, Group descriptors at 229377-229377 Backup superblock at 294912, Group descriptors at 294913-294913 Backup superblock at 819200, Group descriptors at 819201-819201 Backup superblock at 884736, Group descriptors at 884737-884737 root@Microknoppix:/home/knoppix# Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how, or whether this would work. I also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?

    Read the article

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Most are about OOP in general. Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement. The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants) and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed', but approve 'allowable', which will actually approve the whole perm check. We have like 11 validators, too many to easily track at such minute detail. So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable,' now it would instead say '"allowable" is ok', at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okay-ed. The implementation is just 4 trivial objects, and rewriting a 10 line function into a 15 line function. abstract class PermissionAtom { public function allow(); // maybe deny() as well public function wasAllowed(); } class PermissionField extends PermissionAtom { public function getName(); public function getValue(); } class PermissionIdentifier extends PermissionAtom { public function getIdentifier(); } class PermissionAction extends PermissionAtom { public function getType(); } They say that this is 'not going to get us anything important' and it is 'too complex' and 'will be difficult for new developers to pick up.' I respectfully disagree, and there I end my context to begin the broader questions. So the questions are about my OOP; are there any guidelines I should know: is this too complicated/too much OOP? Not that I expect to get more than 'it depends, I'd have to see if...' when is OO abstraction too much? when is OO abstraction too little? how can I determine when I am overthinking a problem vs fixing one? how can I determine when I am adding bad code to a bad project? how can I pitch these APIs? I feel the other devs would just rather say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.

    Read the article

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add: Splash screens Drop-down menus Tab pages Pages that don't grow downward with content; content is inside a scrollable area, so the page is of a fixed size, as if resembling the one-screen limitation of desktop apps. Modal windows, pop-ups, etc. Tree views Absolutely no access to content unless you log in first, even with non-sensitive content. After the splash screen disappears, you are presented with a login screen. No links - just simulated buttons. Fixed page size. Cannot open a link in another tab Print button that prints directly (not showing a printable page, so the user can't print via the browser's print command) Progress bars for loading content even when the browser indicates it with its own animation Fonts and colors emulate a desktop app made with Visual Basic, PowerBuilder etc. Every app seems almost as if it were made in Visual Basic. They reject these elements: Breadcrumbs Good old underlined links Generated/dynamic navigation, usage-based suggestions Ability to open links in multiple tabs Pagination Printable pages Ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question. The only URL is the main one. Back button To achieve this, tons of JavaScript code is needed. Lots and lots of JavaScript and Ajax code for things not related to the business but to the necessity of hiding/showing that button, refreshing this listbox, greying out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app. What is the best way to change this mindset, and make them embrace the web, and start producing modern web apps instead of desktop imitations? EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of high costs. But at least if that mindset is changed, new developments would be more web-like.

    Read the article

  • Working with Windows and Unix

    - by user554629
    Beware of new line characters One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix.   The transfer can occur at any stage, but ultimately involves a transfer of a file using an ftp client that is running on Windows; it could be ftp or FileZilla. Windows uses two characters to mark the end of a line in a text file (CR/LF), carriage return, linefeed.   Unix uses a single character (LF). In all situations, it is best to use binary mode transfer for all files, including ascii text files. Common problems: upload a core file from Unix to Windows using ftp in ascii mode. The file is going to be larger on Windows than Unix. ftp doesn't know if this is a text file with real line-ends; it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle. download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the Unix script's line-end chars will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why?  The first line of a shell script (called the "sh-bang") identifies the command interpreter the Unix shell should use for this script.   Common examples: #!/bin/sh #!/bin/ksh #!/bin/bash #!/bin/perl #!/bin/sh^M    # will not be understood. #!/bin/env ksh # special syntax.  Find ksh and run it dos2unix is a common utility found on most Unix platforms that repairs the issue of Windows line-end characters in Unix script files.   I've written my own flavor of this utility for use in Tech Support and build environments that is a bit easier to use, and has some nice side-effects. accepts a list of files:   dos2unix *.sh repairs the file in-place.  Doesn't generate a new file you have to name retains the same timestamp;  it is the line ending that changed, not the file content. Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog. AIX Linux Solaris sparc  Windows 
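    The post doesn't show the utility itself, but the behavior it describes (fix CR/LF in place, accept a list of files, keep the original timestamp) is easy to sketch. A minimal illustration in C# - not the author's actual tool:

      using System;
      using System.IO;

      class Dos2Unix
      {
          static void Main(string[] args)
          {
              foreach (var path in args)                               // accepts a list of files
              {
                  var lastWrite = File.GetLastWriteTimeUtc(path);      // remember the original timestamp
                  var text = File.ReadAllText(path);
                  File.WriteAllText(path, text.Replace("\r\n", "\n")); // CR/LF -> LF, in place
                  File.SetLastWriteTimeUtc(path, lastWrite);           // restore the timestamp
              }
          }
      }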

    Read the article

  • Best language on Linux to replace manual tasks that use SSH/Telnet? [on hold]

    - by Calab
    I've been tasked with creating and maintaining a web browser based interface to replace several of the manual tasks that we perform now. I currently have a "shaky" but working program written in Perl (2779 lines) that uses basic Expect coding, but it has some limitations that require a great deal of coding to get around. Because of this I am going to do a complete rewrite and want to do it "right" this time. My question is this... What would be the best language to use to create a web based interface to perform SSH/Telnet tasks that we would normally do manually? Keep in mind the following requirements: Runs on a CentOS Linux system v5.10 HTTP will be served by Apache2 This is an INTRANET site and only accessible within our organization. User load will be light. No more than 5 users accessing it at one time. perl 5.8.8, php 5.3.3, python 2.7.2 are available... Not sure what other languages to check for, or what modules might be installed in each language. The web interface will need to provide progress indicators and text output produced by the remote connection, in real time as it is generated. If we are running our process on multiple hosts, they should be in individual threads so that they can run side by side, not sequentially. I want the ability to "trap" on specific text generated by the remote host and display an alert to the user - such as when the remote host generates an error message. I would like to avoid as much client side scripting (javascript/vbscript) as I can. Most users will be on Windows PCs using Chrome or IE as a browser. Users will be downloading the resulting output so they can process it as they see fit. I currently have no experience with "Ajax" or the like. Most of my coding experience is old 6809 assembly, Visual Basic 6, and whatever I can cut/paste from online examples in various languages (hence my "shaky" Perl program). My coding environment is Eclipse for remote code editing, but I prefer stuff like UltraEdit if I can get a decent syntax file for the language I'm using. I do have su access on the server, but I'm not the only one using this server so I can't just upgrade/install blindly as I might impact other software currently running on the machine. One reason that I'm asking here, instead of searching (which I did), is that most replies were "use language 'xyz', but you need to use an external SSH connection" - like I'm using Expect in my Perl script. Most also did not agree on what language that 'xyz' should be. ...so, after this long posting, can someone offer some advice?

    Read the article

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.) I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure this is the real cause of the OS corruption. The corruption happened after experimenting with backups, so I think I did something wrong in that regard. I became aware there was a problem when I tried to access a command using sudo: $ sudo ls sudo: unable to open /etc/sudoers: Permission denied sudo: no valid sudoers sources found, quitting sudo: unable to initialize policy plugin This is what /etc/sudoers looks like: $ ls -ald /etc/sudoers -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers At this point I tried to reboot and now this is the message I get: The system is running in low graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself. I tried to follow the wizard to configure these settings, but without luck (the system prevents me from going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable and I'm able to access it using a live CD. I ran memtest and the RAM seems to be OK. Do you have any idea about how to recover my system? I'm very glad to provide further information, just let me know what info could be helpful. UPDATE. The issue is about wrong permissions and this is how I discovered it: I mounted the root partition of the broken OS on /mnt/broken/ (live CD) and did ls /mnt/broken/. I got a permission denied error, while I expected to get the directory listing. I had to do sudo ls /mnt/broken/ and this worked. Thus without root permission via sudo it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is: drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken Any thoughts on how to restore the old (working) set of permissions?

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue In the previous chapter we have see how to create DB records with EF, now we are going to Some Questions about Oracle.   ORACLE One characteristic from SQL Server that differs from Oracle is “Identity”. To all that has not worked with SQL Server, this property, that applies to Integer Columns, lets indicate that there is auto increment columns, by that way it will be filled automatically, without writing it on the insert statement. In EF with SQL Server, the properties whose match with Identity columns, will be filled after invoking SaveChanges method. In Oracle DB, there is no Identity Property, but there is something similar. Sequences Sequences are DB objects, that allow to create auto increment, but there are not related directly to a Table. The syntax is as follows: name, min value, max value and begin value. 1: CREATE SEQUENCE nombre_secuencia 2: INCREMENT BY numero_incremento 3: START WITH numero_por_el_que_empezara 4: MAXVALUE valor_maximo | NOMAXVALUE 5: MINVALUE valor_minimo | NOMINVALUE 6: CYCLE | NOCYCLE 7: ORDER | NOORDER 8:    How to get sequence value? To obtain the next val from the sequence: 1: SELECT nb_secuencia.Nextval 2: From Dual Due to there is no direct way to indicate that a column is related to a sequence, there is several ways to imitate the behavior: Use a Trigger (DB), Use Stored Procedures or Functions(…) or my particularly option. EF model, only, imports Table Objects, Stored Procedures or Functions, but not sequences. By that, I decide to create my own extension Method to invoke Next Val from a sequence: 1: public static class EFSequence 2: { 3: public static int GetNextValue(this ObjectContext contexto, string SequenceName) 4: { 5: string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString; 6: Connection=Connection.Substring(Connection.IndexOf(@"connection string=")+19); 7: Connection = Connection.Remove(Connection.Length - 1, 1); 8: using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection)) 9: { 10: using (IDbCommand cmd = con.CreateCommand()) 11: { 12: con.Open(); 13: cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName); 14: return Convert.ToInt32(cmd.ExecuteScalar()); 15: } 16: } 17:  18: } 19: } This Object Context’s extension method are going to invoke a query with the Sequence indicated by parameter. It takes the connection strings from the App settings, removing the meta data, that was created by VS when generated the EF model. And then, returns the next value from the Sequence. The next value of a Sequence is unique, by that, when some concurrent users are going to create records in the DB using the sequence will not get duplicates. This is my own implementation, I know that it could be several ways to do and better ways. If I find any other way, I promise to post it. To use the example is needed to add a reference to the Oracle (ODP.NET) dll.

    Read the article

  • Javascript Inheritance Part 2

    - by PhubarBaz
    A while back I wrote about Javascript inheritance, trying to figure out the best and easiest way to do it (http://geekswithblogs.net/PhubarBaz/archive/2010/07/08/javascript-inheritance.aspx). That was 2 years ago and I've learned a lot since then. But only recently have I decided to just leave classical inheritance behind and embrace prototypal inheritance. For most of us, we were trained in classical inheritance, using class hierarchies in a typed language. Unfortunately Javascript doesn't follow that model. It is both classless and typeless, which is hard to fathom for someone who's been using classes the last 20 years. For the last two or three years since I've got into Javascript I've been trying to find the best way to force it into the class model without much success. It's clunky and verbose and hard to understand. I think my biggest problem was that it felt so wrong to add or change object members at run time. Every time I did it I felt like I needed a shower. That's the 20 years of classical inheritance in me. Finally I decided to embrace change and do something different. I decided to use the factory pattern to build objects instead of trying to use inheritance. Javascript was made for the factory pattern because of the way you can construct objects at runtime. In the factory pattern you have a factory function that you call and tell it to give you a certain type of object back. The factory function takes care of constructing the object to your specification. Here's an example. Say we want to have some shape objects and they have common attributes like id and area that we want to depend on in other parts of your application. So first thing to do is create a factory object and give it a factory method to create an abstract shape object. The factory method builds the object then returns it. var shapeFactory = { getShape: function(id){ var shape = { id: id, area: function() { throw "Not implemented"; } }; return shape; }}; Now we can add another factory method to get a rectangle. It calls the getShape() method first and then adds an implementation to it. getRectangle: function(id, width, height){ var rect = this.getShape(id); rect.width = width; rect.height = height; rect.area = function() { return this.width * this.height; }; return rect;} That's pretty simple right? No worrying about hooking up prototypes and calling base constructors or any of that crap I used to do. Now let's create a factory method to get a cuboid (rectangular cube). The cuboid object will extend the rectangle object. To get the area we will call into the base object's area method and then multiply that by the depth. getCuboid: function(id, width, height, depth){ var cuboid = this.getRectangle(id, width, height); cuboid.depth = depth; var baseArea = cuboid.area; cuboid.area = function() { var a = baseArea.call(this); return a * this.depth; } return cuboid;} See how we called the area method in the base object? First we save it off in a variable then we implement our own area method and use call() to call the base function. For me this is a lot cleaner and easier than trying to emulate class hierarchies in Javascript.

    Read the article

  • Deleted Myself from Admin Group - Now Getting Error usermod: cannot lock /etc/passwd; try again later

    - by BubbaJ
    I have a laptop with Ubuntu 11.10 that is shared between myself and two other family members. My user ID was set up as the only "Administrator" on the laptop. The other users were set up as "Standard" users. In my attempt to add myself to the user groups for the other users, I somehow deleted myself from the admin groups. I used the "usermod" command from the terminal. I must have neglected to include the proper switches or syntax for the update. It looks like I successfully added my user ID to the group associated with my wife's account. When I use the "groups" command, I can see only my ID and my wife's ID in the list. I no longer see the "admin" or "adm" groups, and others that used to be listed. When I go into System Settings > User Accounts it looks like my ID is now listed as a "Standard" user. I would like to change my account back to "Administrator", but now I can't. I did some searches for solutions and found that I would need to boot into Recovery Mode and execute the usermod command from the root session. I was able to successfully boot into Recovery Mode and get to the root session. I was trying to execute the command "usermod -a -G admin user1" to add my ID (user1) back to the admin group. When I execute the command from the root session, I get the error message "usermod: cannot lock /etc/passwd; try again later". I tried preceding the usermod command with "sudo", but it didn't make a difference; same error. I then tried adding a new user using adduser, thinking I would create a new user ID and make it part of the admin group. I get the same error using the adduser command. I saw some posts that recommend looking for and deleting files that end in ".lock" in the /etc directory. The only file I found was .pwd.lock, which I haven't touched. I am at a loss as to what to try next. I am relatively inexperienced with Ubuntu and Linux, so a lot of this is new to me. Any help you can provide would be much appreciated.

    Read the article

  • Protobuf design patterns

    - by Monster Truck
    I am evaluating Google Protocol Buffers for a Java based service (but am expecting language agnostic patterns). I have two questions: The first is a broad general question: What patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*), etc. There is very little information of this sort on the Google Protobuf help pages and public blogs, while there is a ton of information for established protocols such as XML. I also have specific questions about the following two different patterns: Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service - which is basically the default approach, I guess. Do the same but also include hand crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity): public V toProtobufMessage() { V.Builder builder = V.newBuilder(); for (Item item : getItemList()) { builder.addItem(item); } return builder.setAmountPayable(getAmountPayable()). setShippingAddress(getShippingAddress()). build(); } public static T fromProtobufMessage(V message_) { return new T(message_.getShippingAddress(), message_.getItemList(), message_.getAmountPayable()); } One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with .proto files. Does anyone have better techniques or further critiques on either of the two approaches? *By encapsulating a repeated field I mean messages such as this one: message ItemList { repeated Item item = 1; } message CustomerInvoice { required ShippingAddress address = 1; required ItemList items = 2; required double amountPayable = 3; } instead of messages such as this one: message CustomerInvoice { required ShippingAddress address = 1; repeated Item item = 2; required double amountPayable = 3; } I like the latter but am happy to hear arguments against it.

    Read the article

  • Can't install Git

    - by davemc
    I'm following the tutorial below to install Git. https://help.github.com/articles/set-up-git However, when I get to the end, where I need to install the helper into the same directory where Git itself is installed, I get the following error: Davids-iMac:~ davidcavanagh$ which git /usr/bin/git Davids-iMac:~ davidcavanagh$ sudo mv git-credential-osxkeychain /usr/bin mv: rename git-credential-osxkeychain to /usr/bin/git-credential-osxkeychain: No such file or directory Davids-iMac:~ davidcavanagh$ Edit: I am now getting the following error when I install Git and then run git -version Davids-iMac:~ davidcavanagh$ git -version /usr/bin/git: line 1: syntax error near unexpected token `newline' /usr/bin/git: line 1: `<?xml version="1.0" encoding="UTF-8"?>' I was following this tutorial guide: https://help.github.com/articles/set-up-git I have also tried using Homebrew, and I get the following error when I do: Davids-iMac:~ davidcavanagh$ ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)" ==> This script will install: /usr/local/bin/brew /usr/local/Library/... /usr/local/share/man/man1/brew.1 Press ENTER to continue or any other key to abort ==> Downloading and Installing Homebrew... Failed during: git init -q Can anyone help? Thanks

    Read the article

  • "file not found" error while commiting

    - by AntonAL
    I have a working copy, checked out from an SVN repository. When I try to commit, I get the following error: svn: File not found: revision 57, path '/trunk/path/to/my/file/logo-mini.jpg' I've found this file in the repo and noticed that it has only one revision - 58. I don't understand why SVN complains about this file when it is present, and why it points to revision 57 instead of 58. I've also renamed the great-great-grandparent folder of this file. Possibly this is the issue ... Update Detailed error description that I got from the Cornerstone app (Mac OS X): Description : Could not find the specified file. Suggestion : Check that the path you have specified is correct. Technical Information ===================== Error : V4FileNotFoundError Exception : ZSVNNoSuchEntryException Causal Information ================== Description : Commit failed (details follow): Status : 160013 File : subversion/libsvn_client/commit.c, 867 Description : File not found: revision 57, path '/trunk/assets/themes/base/article-content/images/logo-mini.jpg' Status : 160013 File : subversion/libsvn_fs_fs/tree.c, 663 So, I renamed the "/trunk/assets/themes" directory to "/trunk/assets/skins" while improving the project structure. I've tried the following: updating the /trunk/assets/themes directory, cleaning, deleting from the filesystem and checking out again, reverting the entire /trunk/assets/themes directory to the HEAD revision. Even this doesn't help. Still getting the same error. I've got no results.

    Read the article

  • Diagnosing permission problems with Cobian Backup to network share

    - by DaveBurns
    I'm running the latest Cobian 11. I have a Synology DS412 NAS. All of my machines (Mac and Windows) access it just fine when I'm logged in and browse to it manually. I have Cobian installed as a service on two Windows machines: WinXP SP3 and Win7 x64. On both machines, the service is set to log on with my user account, which is in the Windows administrator group. Backups on both machines fail with the message "Couldn't create the destination directory "\\nas1\backups\foo\bar\": The filename, directory name, or volume label syntax is incorrect". I have tried setting the NAS's share to allow anonymous read/write access, but it made no difference. Although I want the backups to run unattended in the middle of the night, I have tested them by running them manually while I'm logged in, but no luck. Before starting that, I make sure that I can browse to the NAS with Explorer to ensure that any authentication session between Windows and the NAS has not expired. Still no luck. I have tried both creating the destination directory on the NAS before the backup and deleting it so the backup job could create it with the client's credentials, but no luck. The usual answer in the Cobian support forums is that there is a permission problem. I agree. But at this point, what can I do to diagnose this further?

    Read the article

  • Subnetting: is 192.168.0.1/24 different than 192.168.0.1/25?

    - by SM
    So I am trying to understand subnetting. I understand that you can take the 192.168.0.0/24 network and break it up into 2 subnets: 192.168.0.0/25 and 192.168.0.128/25. So what if I need to create 2 more subnets inside both of those subnets? What if I need to create 2 subnets, which have 2 more subnets inside them, and each of those 2 subnets has 2 hosts? As per my calculation, this would be something like: top 2 subnets: 192.168.0.0/25 and 192.168.0.128/25 subnets of 192.168.0.0/25: 192.168.0.0/26 - 192.168.0.128/26 subnets of 192.168.0.128/25: 192.168.0.128/26 - 192.168.0.192/26 hosts of 192.168.0.0/25: 192.168.0.1/25 - 192.168.0.2/25 hosts of 192.168.0.128/25: 192.168.0.129/25 - 192.168.0.130/25 hosts of 192.168.0.0/26: 192.168.0.1/26 - 192.168.0.2/26 hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26 hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26 hosts of 192.168.0.192/26: 192.168.0.193/26 - 192.168.0.194/26 The above doesn't seem right; I am moving down like a tree, however the subnet mask and IPs are being repeated. I am just adding 1 to /x and making the IPs from there. Can anyone please tell me if this is correct? The scenario I am trying to understand is similar to this, however I just want to add hosts at each level as well. http://www.ciscotrick.com/wp-content/uploads/2008/08/the_mathematical_relationship_between_network_block_sizes.jpg

    Read the article
