Search Results

Search found 22716 results on 909 pages for 'network architecture'.

Page 218/909

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried these approaches so far:

    1) Make a hash with the source IP/port and destination IP/port as the key. Each position in the hash is a list of packets. The hash is then saved to a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.
    2) Make a hash with the same key as above, but only keep the file handles in memory. Each packet is then put into the hash[key] that points to the right file. Problems: too many flows/files (~200k), and it might run out of memory as well.
    3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because too many are open at the same time. Problems: WAY too slow, and the same number of files as 2, so also impractical.
    4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: suppose I have a 60 MB trace with 200k flows. This way I would process a 60 MB file 200k times. Maybe removing the packets as I iterate would make it less painful, but so far I'm not sure this would be a good solution.
    5) Split them by source/destination IP and then create a single file for each one, separating the flows by special characters. Still too many files (50k+).

    Right now I'm using Ruby to do it, which might have been a bad idea. I've already filtered the traces with tshark so that they only contain relevant info, so I can't really make them any smaller. I thought about loading everything into memory as described in 1) using C#/Java/C++, but I was wondering whether there isn't a better approach here, especially since I might also run out of memory later on, even with a more efficient language, if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or I run out of memory. I've also tried searching for a tool to filter the info, but I don't think there is one; the ones I've found only return some statistics and wouldn't scan for every flow as I need.
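    One common way out of the "too many files vs. not enough memory" trap is a two-pass bucketing scheme: hash each flow key into a small, fixed number of bucket files, then group each bucket in memory one at a time. The sketch below is a rough illustration of that idea in C# (which the question already mentions as an option); the PacketRecord type, the file names, and the caller that parses the filtered tshark output are assumptions for the example, not part of the original trace format.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Placeholder for one parsed packet; producing these from the filtered
        // tshark output is assumed to happen elsewhere.
        class PacketRecord
        {
            public string FlowKey = "";   // e.g. "srcIP:srcPort-dstIP:dstPort-proto"
            public string Line = "";      // the packet's (already filtered) text
        }

        static class FlowGrouper
        {
            const int BucketCount = 256;  // caps both open files and per-bucket memory

            public static void GroupIntoFlows(IEnumerable<PacketRecord> packets, string outPath)
            {
                // Pass 1: spread packets over a fixed number of bucket files by key hash.
                var writers = new StreamWriter[BucketCount];
                for (int i = 0; i < BucketCount; i++)
                    writers[i] = new StreamWriter($"bucket_{i}.tmp");

                foreach (var p in packets)
                {
                    int b = (p.FlowKey.GetHashCode() & 0x7fffffff) % BucketCount;
                    writers[b].WriteLine(p.FlowKey + "\t" + p.Line);
                }
                foreach (var w in writers) w.Dispose();

                // Pass 2: each bucket holds only a fraction of the flows, so grouping it
                // with an in-memory dictionary stays feasible even for large traces.
                using (var output = new StreamWriter(outPath))
                {
                    for (int i = 0; i < BucketCount; i++)
                    {
                        var flows = new Dictionary<string, List<string>>();
                        foreach (var line in File.ReadLines($"bucket_{i}.tmp"))
                        {
                            int tab = line.IndexOf('\t');
                            string key = line.Substring(0, tab);
                            if (!flows.TryGetValue(key, out var list))
                                flows[key] = list = new List<string>();
                            list.Add(line.Substring(tab + 1));
                        }
                        foreach (var kv in flows)
                        {
                            output.WriteLine("### flow " + kv.Key);
                            foreach (var pkt in kv.Value) output.WriteLine(pkt);
                        }
                        File.Delete($"bucket_{i}.tmp");
                    }
                }
            }
        }

    The same idea works in Ruby; the language matters much less than keeping both the open-file count and the in-memory working set bounded at the same time.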

    Read the article

  • Get generic CRUD operations in EF

    - by kathy
    Hello, is there any way or design pattern I can use to get generic CRUD operations? I'm working on an n-tier application using EF in the data layer, and I don't want to repeat the CRUD functions for every entity. Your help would be appreciated.
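    A common answer here is a generic repository wrapped around the EF context, so the CRUD plumbing is written once and closed over the entity type. Below is a minimal sketch assuming a DbContext-based model (EF 4.1+ or EF Core); the Repository class and the Customer entity in the usage comment are illustrative names, not part of any particular library.

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.EntityFrameworkCore;   // or System.Data.Entity for classic EF

        public class Repository<TEntity> where TEntity : class
        {
            private readonly DbContext _context;
            private readonly DbSet<TEntity> _set;

            public Repository(DbContext context)
            {
                _context = context;
                _set = context.Set<TEntity>();
            }

            public TEntity GetById(params object[] keyValues) => _set.Find(keyValues);

            public IEnumerable<TEntity> GetAll() => _set.ToList();

            public void Add(TEntity entity) => _set.Add(entity);

            public void Remove(TEntity entity) => _set.Remove(entity);

            public void Save() => _context.SaveChanges();
        }

        // Usage (Customer is a placeholder entity):
        //   var customers = new Repository<Customer>(dbContext);
        //   customers.Add(new Customer { Name = "Acme" });
        //   customers.Save();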

    Read the article

  • http response to GET request - working in FF not Chromium

    - by Tyler
    For fun I'm trying to write a very simple server in C. When I send this response to Firefox it prints out the body "hello, world", but Chromium gives me an Error 100 (net::ERR_CONNECTION_CLOSED): Unknown error. This, I believe, is the relevant code:

        char *response = "HTTP/1.0 200 OK\r\n"
                         "Vary: Accept-Encoding, Accept-Language\r\n"
                         "Connection: Close\r\n"
                         "Content-Type: text/plain\r\n"
                         "Content-Length:20\r\n"
                         "\r\n"
                         "hello, world";

        if (send(new_fd, response, strlen(response), 0) == strlen(response)) {
            printf("sent\n");
        }
        close(new_fd);

    What am I missing? Thanks!

    Read the article

  • php, user-uploaded files, version control, and website deployment

    - by user151841
    I have a website whose code I update regularly. I keep it in version control. When I want to deploy a new version of the site, I do an export and then symlink the served directory name to the directory of the deployment. There is a place where users can upload files, and I noticed once that, after I had deployed a new version, the user files were gone! Of course, I hadn't added them to the repository, and since the served site was from an export, they weren't uploaded into a version-controlled directory anyway. PHP doesn't yet have integrated svn functionality, so I couldn't do much programmatically with user-uploaded files. My solution was to create an additional website, files.website.com, which sits in a directory parallel to the served website and is served out of a directory that is under version control. That way the files don't get obliterated when I upgrade the website. From time to time, I manually add uploaded files to the svn project, delete user-deleted ones, and commit the new version. I'm working on a shell script to run from cron to do this, but it isn't my forte, so it's on the back burner as it's not a pressing need. Is there a better way to do this?

    Read the article

  • JavaEE Application Server or Lightweight Container?

    - by Jeff Storey
    Let me preface this by saying this is not an actual situation of mine but I'm asking this question more for my own knowledge and to get other people's inputs here. I've used both Spring and EJB3/JBoss, and for the smaller types of applications I've built, Spring (+Tomcat when needed) has been much simpler to use. However, when scaling up to larger applications that require things like load balancing and clustering, is Spring still a viable solution? Or is it time to turn to a solution like EJB3/JBoss when you start to get big enough to need that? I'm not sure if I've scoped the problem well enough to get a good answer, so please let me know. Thanks, Jeff

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to creating a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - as far as I can tell, they all refer to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating both "ivory tower" architects and developers - the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting feedback on, since having a strong CIM is a prerequisite for SOA. Thanks for your help! KA

    Update: I've heard that another strategy goes along with the overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.

    Read the article

  • How to detect internet connectivity using a Java program

    - by Sunil Kumar Sahoo
    How do I write a Java program that will tell me whether I have internet access or not? I do not want to ping or create a connection with some external URL, because if that server is down my program will not work. I want a reliable way that will tell me with a 100% guarantee whether I have an internet connection, irrespective of my operating system. The program is for computers that are directly connected to the internet. I have tried the program below:

        URL url = new URL("http://www.xyz.com/");
        URLConnection conn = url.openConnection();
        conn.connect();

    I want something more appropriate than this. Thanks, Sunil Kumar Sahoo

    Read the article

  • Fluent NHibernate SchemaExport to SQLite not pluralizing Table Names

    - by weenet
    I am using SQLite as my db during development, and I want to postpone actually creating a final database until my domains are fully mapped. So I have this in my Global.asax.cs file:

        private void InitializeNHibernateSession()
        {
            Configuration cfg = NHibernateSession.Init(
                webSessionStorage,
                new[] { Server.MapPath("~/bin/MyNamespace.Data.dll") },
                new AutoPersistenceModelGenerator().Generate(),
                Server.MapPath("~/NHibernate.config"));

            if (ConfigurationManager.AppSettings["DbGen"] == "true")
            {
                var export = new SchemaExport(cfg);
                export.Execute(true, true, false, NHibernateSession.Current.Connection, File.CreateText(@"DDL.sql"));
            }
        }

    The AutoPersistenceModelGenerator hooks up the various conventions, including a TableNameConvention like so:

        public void Apply(FluentNHibernate.Conventions.Instances.IClassInstance instance)
        {
            instance.Table(Inflector.Net.Inflector.Pluralize(instance.EntityType.Name));
        }

    This is working nicely except that the generated SQLite db does not have pluralized table names. Any idea what I'm missing? Thanks.

    Read the article

  • Java HTTP Client Request with defined timeout

    - by Maxim Veksler
    Hello, I would like to add BITs (built-in tests) to a number of servers in my cloud. I need the request to fail if the response takes too long. How should I do this in Java? Trying something like the code below does not seem to work:

        public class TestNodeAliveness {
            public static NodeStatus nodeBIT(String elasticIP) throws ClientProtocolException, IOException {
                HttpClient client = new DefaultHttpClient();
                client.getParams().setIntParameter("http.connection.timeout", 1);

                HttpUriRequest request = new HttpGet("http://192.168.20.43");
                HttpResponse response = client.execute(request);

                System.out.println(response.toString());
                return null;
            }

            public static void main(String[] args) throws ClientProtocolException, IOException {
                nodeBIT("");
            }
        }

    EDIT (to clarify what library is being used): I'm using httpclient from Apache; here is the relevant pom.xml section:

        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.0.1</version>
            <type>jar</type>
        </dependency>

    Read the article

  • How do I dismiss an NSPanel when creating or opening a new document?

    - by mipadi
    I am working on a document-based Cocoa application. At startup, the user is presented with a "welcome panel" (of type NSPanel) with buttons for common actions like "Create New Document" and "Open Existing Document". These actions are linked to the first responder's newDocument: and openDocument: actions, respectively, just like the matching items in the File menu. Everything works as expected...with two caveats:

    1. The welcome panel is not dismissed when creating or opening a new document.
    2. Document windows do not have focus when they are created.

    I have partially solved #1 by making my application controller a delegate of the welcome panel. When clicking the "Open Existing Document" button, the panel resigns its key status (since a file browser dialog is being opened), so I can close the panel in the delegate's windowDidResignKey: method. However, I can't figure out how to close the panel when creating a new document, since I can't find a notification that is posted, or a delegate method that is called, when creating a new document. And ultimately, #2 is still a problem, since the document windows don't gain focus when they're created. I have only subclassed NSDocument -- I'm not using a custom document or window controller at all. I've also tried changing the panel to an NSWindow, thinking that an NSWindow may behave differently, but the same problems are occurring.

    Read the article

  • .NET: How would I build a DAL to meet my requirements?

    - by Jonno
    Assume that I must deploy an ASP.NET app over the following 3 servers:

    1) DB - not public
    2) 'Middle' - not public
    3) Web server - public

    I am not allowed to connect from the web server to the DB directly; I must pass through 'middle'. This is purely to slow down an attacker if they breached the web server. All DB access is via stored procedures, with no table access. I simply want to provide the web server with an ADO.NET DataSet (I know many will dislike this, but this is the requirement). Using ASMX web services works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an ssh/vpn tunnel so that the web server connects to the DB 'via' the middle server seems to remove any possible benefit of maintaining 'middle'. Using WCF with binary/TCP removes the XML problem, but there is still extra code. Is there an approach that provides the ease of ssh/vpn but keeps the potential benefit of having the DAL on the middle server? Many thanks.
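    For what it's worth, the WCF-with-binary/TCP option usually ends up being a thin wrapper on the middle server around the stored-procedure calls, handing the web server the ready-made DataSet. Below is a rough sketch of that shape; the contract name, method, parameter convention and endpoint address are made up for illustration, and a real version would need proper security and configuration.

        using System.Data;
        using System.Data.SqlClient;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMiddleTierData
        {
            [OperationContract]
            DataSet ExecuteProcedure(string procedureName, string[] parameterNames, string[] parameterValues);
        }

        // Hosted on the middle server; only this box knows the DB connection string.
        public class MiddleTierData : IMiddleTierData
        {
            public DataSet ExecuteProcedure(string procedureName, string[] parameterNames, string[] parameterValues)
            {
                var result = new DataSet();
                using (var connection = new SqlConnection("...connection string from middle-server config..."))
                using (var command = new SqlCommand(procedureName, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    for (int i = 0; i < parameterNames.Length; i++)
                        command.Parameters.AddWithValue(parameterNames[i], parameterValues[i]);

                    new SqlDataAdapter(command).Fill(result);   // opens/closes the connection itself
                }
                return result;
            }
        }

        // On the web server, the call goes over net.tcp (binary framing, no ASMX-style text XML):
        //   var factory = new ChannelFactory<IMiddleTierData>(new NetTcpBinding(), "net.tcp://middle:8000/data");
        //   DataSet orders = factory.CreateChannel().ExecuteProcedure(
        //       "GetOrdersForCustomer", new[] { "@CustomerId" }, new[] { "42" });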

    Read the article

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app, simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague: if the site needs to scale up on short notice, does the cloud provider (Amazon, Rackspace, whoever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.

    Read the article

  • When does an ARM7 processor increase its PC register?

    - by Summer_More_More_Tea
    Hi everyone, I've been thinking about this question for a while: when does an ARM7 processor (with its 3-stage pipeline) increment its PC register? I originally thought that after an instruction has been executed, the processor first checks whether any exception occurred during the last execution, then increases the PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 changes its running mode, stores the PC in the LR of the current mode, and begins to process the exception without modifying the PC register. But this makes no sense when analyzing the return instructions. I cannot work out why the PC is assigned LR when returning from an undefined-instruction exception, but LR-4 when returning from a prefetch-abort exception; don't both of these exceptions happen at the decode stage? What's more, according to my textbook, the PC is always assigned LR-4 when returning from a prefetch-abort exception, no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think the PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes of an ARM instruction, and we just want to roll back one instruction in the current state. Are there flaws in my reasoning, or is something wrong with the textbook? It seems a long question; I really hope someone can help me get to the right answer. Thanks in advance.

    Read the article

  • Separate functionality depending on Role in ASP.NET MVC

    - by Andrew Bullock
    I'm looking for an elegant pattern to solve this problem: I have several user roles in my system, and for many of my controller actions I need to deal with slightly different data. For example, take /Users/Edit/1. This allows a Moderator to edit a user's email address, but Administrators to edit a user's email address and password. I'd like a design for separating the two different bits of action code for the GET and the POST. Solutions I've come up with so far are:

    - A switch inside each method; however, this doesn't really help when I want different model arguments on the POST :(
    - A custom controller factory which chooses a UsersController_ForModerators or UsersController_ForAdmins instead of just UsersController, based on the controller name and the current user role
    - A custom action invoker which chooses the Edit_ForModerators method in a similar way to the above
    - Have an IUsersController and register a different implementation of it in my IoC container as a named instance based on role
    - Build an implementation of the controller at runtime using Castle DynamicProxy and manipulate the methods to those from role-based implementations

    I'm preferring the named IoC instance route at the moment, as it means all my URLs/routing will work seamlessly. Ideas? Suggestions?
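    As a point of comparison for the action-invoker-style options, ASP.NET MVC's ActionMethodSelectorAttribute can route one action name to role-specific methods, and therefore to different model arguments on the POST. The sketch below is one possible shape rather than an established pattern; RequiresRole, the role names, and the model classes are invented for illustration.

        using System;
        using System.Reflection;
        using System.Web.Mvc;

        [AttributeUsage(AttributeTargets.Method)]
        public class RequiresRoleAttribute : ActionMethodSelectorAttribute
        {
            private readonly string _role;
            public RequiresRoleAttribute(string role) { _role = role; }

            // Only lets MVC pick this method when the current user is in the given role.
            public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
            {
                return controllerContext.HttpContext.User.IsInRole(_role);
            }
        }

        public class UsersController : Controller
        {
            [ActionName("Edit"), RequiresRole("Moderator")]
            public ActionResult EditForModerators(int id) { return View("Edit" /*, email-only model */); }

            [ActionName("Edit"), RequiresRole("Administrator")]
            public ActionResult EditForAdministrators(int id) { return View("Edit" /*, email + password model */); }

            [HttpPost, ActionName("Edit"), RequiresRole("Moderator")]
            public ActionResult EditForModerators(ModeratorEditModel model) { /* save email */ return RedirectToAction("Index"); }

            [HttpPost, ActionName("Edit"), RequiresRole("Administrator")]
            public ActionResult EditForAdministrators(AdminEditModel model) { /* save email + password */ return RedirectToAction("Index"); }
        }

        public class ModeratorEditModel { public string Email { get; set; } }
        public class AdminEditModel : ModeratorEditModel { public string Password { get; set; } }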

    Read the article

  • Monitoring outgoing internet traffic

    - by Frane
    Is there a way to monitor internet traffic programmatically? I would like to log the pages users are visiting on the internet. Can this be achieved with .NET code, or is there a 3rd-party .NET component that could be used to retrieve the data? Information about internet traffic must be stored in a database, so I cannot use a plugin or something similar for IE. We are also looking to include this code in our existing product, so we cannot use a 3rd-party product that cannot be redistributed. It would be cool if this thing could monitor traffic for all browsers, but monitoring IE traffic might also be sufficient.

    Read the article

  • Can games be considered real-time systems?

    - by harry
    I've been reading up on real-time systems and how they work. I was also looking at the Wikipedia article, which said that a game of chess with a timer per move can be considered a real-time system, because the program MUST compute a move within that time. What about other games? As we know, games generally try to run at 25+ FPS. Could a game be considered a soft real-time system, since if it falls under 25 (I'm using 25 as a pre-defined threshold, btw) it's not the end of the world, just a hit to the performance we wanted? Also, games have events they must handle as well: the user uses the keyboard/mouse, and the system must answer those events within (again) a pre-defined time before the game is considered to have "failed". Oh, and I'm talking single-player for now to keep things simple. It sounds like games fit the soft real-time system criteria, but I'd like to know if I'm missing anything... thanks.

    Read the article

  • Distributed Message Ordering

    - by sbanwart
    I have an architectural question about handling message ordering. For the purposes of this question the transport is irrelevant, so I'm not going to specify one. Say we have three systems: a website, a CRM and an ERP. For this example, the ERP is the "master" system in terms of data ownership. The website and the CRM can both send a new-customer message to the ERP system. The ERP system then adds a customer and publishes the customer with the newly assigned account number, so that the website and CRM can add the account number to their local customer records. This is a pretty straightforward process. Next we move on to placing orders. The account number is required in order for the CRM or website to place an order with the ERP system. However, the CRM will permit the user to place an order even if the customer lacks an account number. (For this example, assume we can't modify the CRM's behavior.) This creates the possibility that a user could create a new customer and place an order before the account number gets updated in the CRM. What is the best way to handle this scenario? Would it be best to send the order message sans account number and let it go to an error queue? Would it be better to have the CRM endpoint hold the message and wait until the account number is updated in the CRM? Maybe something completely different that I haven't thought of? Thanks in advance for any help.
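    To make the "hold the message at the endpoint" option concrete, here is a rough sketch of a gateway that parks orders for customers without an account number and releases them when the account-assigned event arrives. The message and type names are invented for this example; with a real bus (NServiceBus, MassTransit, etc.) a saga or deferred message would normally play this role instead of a hand-rolled dictionary.

        using System;
        using System.Collections.Generic;

        public record PlaceOrder(Guid CustomerLocalId, string? AccountNumber, decimal Total);
        public record AccountAssigned(Guid CustomerLocalId, string AccountNumber);

        public class OrderGateway
        {
            private readonly Dictionary<Guid, List<PlaceOrder>> _parked = new();
            private readonly Dictionary<Guid, string> _accounts = new();
            private readonly Action<PlaceOrder> _sendToErp;

            public OrderGateway(Action<PlaceOrder> sendToErp) => _sendToErp = sendToErp;

            public void Handle(PlaceOrder order)
            {
                // Fill in the account number if we have already seen it for this customer.
                if (order.AccountNumber == null && _accounts.TryGetValue(order.CustomerLocalId, out var acct))
                    order = order with { AccountNumber = acct };

                if (order.AccountNumber != null)
                {
                    _sendToErp(order);
                    return;
                }

                // No account number yet: park the order instead of letting it fail downstream.
                if (!_parked.TryGetValue(order.CustomerLocalId, out var list))
                    _parked[order.CustomerLocalId] = list = new List<PlaceOrder>();
                list.Add(order);
            }

            public void Handle(AccountAssigned msg)
            {
                _accounts[msg.CustomerLocalId] = msg.AccountNumber;
                if (_parked.Remove(msg.CustomerLocalId, out var waiting))
                    foreach (var order in waiting)
                        _sendToErp(order with { AccountNumber = msg.AccountNumber });
            }
        }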

    Read the article

  • Help with Event-Based Components

    - by Joel in Gö
    I have started to look at Event-Based Components (EBCs), a programming method currently being explored by Ralf Westphal in Germany, in particular. This is a really interesting and promising way to architect a software solution, and gets close to the age-old idea of being able to stick software components together like Lego :) A good starting point is the Channel 9 video here, and there is a fair bit of discussion in German at the Google Group on EBCs. I am, however, looking for more concrete examples - while the ideas look great, I am finding it hard to translate them into real code for anything more than a trivial project. Does anyone know of any good code examples (in C# preferably), or any more good sites where EBCs are discussed?
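    In the absence of a larger sample, a toy illustration of the wiring style may help: each "part" exposes input pins as methods and output pins as events, and only a board knows how the parts are connected. The sketch below is a minimal example in that spirit, not code taken from Ralf Westphal's material; the part names are invented.

        using System;

        class TextSource
        {
            public event Action<string> Out;                    // output pin
            public void Run() => Out?.Invoke("hello, ebc");     // push a message downstream
        }

        class ToUpperCase
        {
            public event Action<string> Out;                    // output pin
            public void In(string text) => Out?.Invoke(text.ToUpperInvariant());  // input pin
        }

        class ConsoleSink
        {
            public void In(string text) => Console.WriteLine(text);               // input pin
        }

        class Board
        {
            static void Main()
            {
                var source = new TextSource();
                var upper = new ToUpperCase();
                var sink = new ConsoleSink();

                // "Lego" wiring: the board is the only place that knows the topology.
                source.Out += upper.In;
                upper.Out += sink.In;

                source.Run();   // prints "HELLO, EBC"
            }
        }

    The point is that ToUpperCase never references TextSource or ConsoleSink; swapping a part only means changing the wiring on the board.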

    Read the article

  • How sophisticated should a DAL be?

    - by Andrew Florko
    Basically, a DAL (Data Access Layer) should provide simple CRUD (Create/Read/Update/Delete) methods, but I'm always tempted to create more sophisticated methods in order to minimize database round trips from the Business Logic Layer. What do you think about the following extensions to CRUD (most of them are OK, I suppose)?

    - Read: GetById, GetByName, GetPaged, GetByFilter... etc.
    - Create: GetOrCreate methods (the model entity is returned from the DB, or created if not found and then returned); Create(lots-of-relations) instead of Create plus multiple AssignTo calls
    - Update: Merge methods (a list of entities is updated, created and deleted in one call)
    - Delete: Delete(bool children) - optional deletion of children; Cleanup methods

    Where do you usually implement entity-cache capabilities, DAL or BLL? (My choice is BLL, but I have seen DAL implementations as well.) Where is the boundary when you decide: this operation is too specific, so I should implement it in the Business Logic Layer as multiple DAL calls? I have often found insufficient BLL operations that were implemented in dozens of database round trips because the developer was afraid to create a slightly more sophisticated DAL. Thank you in advance!
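    Written down as a contract, the surface described above might look roughly like the interface below. The member names simply follow the question's wording (GetOrCreate, Merge, Cleanup); this is a sketch of the shape, not a standard contract from any library.

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        public interface IRepository<TEntity, TKey> where TEntity : class
        {
            // Read
            TEntity GetById(TKey id);
            TEntity GetByName(string name);
            IList<TEntity> GetPaged(int pageIndex, int pageSize);
            IList<TEntity> GetByFilter(Expression<Func<TEntity, bool>> filter);

            // Create
            TEntity GetOrCreate(TKey id, Func<TEntity> factory);   // returns the existing row or creates it
            void Create(TEntity entity);

            // Update
            void Merge(IEnumerable<TEntity> entities);              // insert/update/delete as one unit

            // Delete
            void Delete(TEntity entity, bool deleteChildren);
            void Cleanup();
        }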

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.)

    Class Design 1 - provide all functionality in a base class, then expose the applicable functionality publicly in subclasses:

        public abstract class RepositoryBase
        {
            protected virtual void SelectBase()
            {
                // implementation...
            }

            protected virtual void InsertBase()
            {
                // implementation...
            }

            protected virtual void UpdateBase()
            {
                // implementation...
            }

            protected virtual void DeleteBase()
            {
                // implementation...
            }
        }

        public class ReadOnlyRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
        }

        public class ReadWriteRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
            public void Insert() { InsertBase(); }
            public void Update() { UpdateBase(); }
            public void Delete() { DeleteBase(); }
        }

    Class Design 2 - the read-write class inherits from the read-only class:

        public class ReadOnlyRepository
        {
            public void Select()
            {
                // implementation...
            }
        }

        public class ReadWriteRepository : ReadOnlyRepository
        {
            public void Insert()
            {
                // implementation...
            }

            public void Update()
            {
                // implementation...
            }

            public void Delete()
            {
                // implementation...
            }
        }

    Is one of these designs clearly stronger than the other? If so, which one and why? P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)

    Read the article

  • Pros and Cons on where to place business logic: app level or DB

    - by Juri
    Hi, I keep running into discussions about where to place the business logic: in a business layer in the application code, or down in the DB in the form of stored procedures. Personally I tend toward the first approach, but I'd like to hear some opinions from you first, without influencing you with my personal views. I know there is no one-size-fits-all solution and it often depends on many factors, but we can discuss that. By the way, we are in the context of web applications, and our current approach is to have:

    - a UI layer which accepts user input and does a first, client-side validation
    - a Business layer with a number of service classes containing the business logic, including (server-side) validation of user input
    - a Data Access Layer which calls stored procedures in the DB for persistence/read operations

    Many people, however, tend to move the business-layer concerns (especially the validation) down into the DB as stored procedures. What do you think about that? I'd like to discuss.
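    For reference, a rough sketch of the layering described above is shown here: server-side validation lives in the business layer, and the data access layer only wraps stored-procedure calls. All class, method and procedure names are invented for illustration.

        using System;
        using System.Collections.Generic;

        public class CustomerDto { public string Name { get; set; } public string Email { get; set; } }

        // Data Access Layer: thin wrapper over stored procedures, no business rules here.
        public interface ICustomerDal
        {
            int InsertCustomer(CustomerDto customer);        // e.g. executes dbo.Customer_Insert
            CustomerDto GetCustomerByEmail(string email);    // e.g. executes dbo.Customer_GetByEmail
        }

        // Business layer: the rules (and server-side validation) live here, in one place,
        // regardless of which UI called us.
        public class CustomerService
        {
            private readonly ICustomerDal _dal;
            public CustomerService(ICustomerDal dal) { _dal = dal; }

            public int Create(CustomerDto customer)
            {
                var errors = new List<string>();
                if (string.IsNullOrWhiteSpace(customer.Name)) errors.Add("Name is required.");
                if (string.IsNullOrWhiteSpace(customer.Email) || !customer.Email.Contains("@"))
                    errors.Add("A valid email address is required.");
                else if (_dal.GetCustomerByEmail(customer.Email) != null)
                    errors.Add("A customer with this email already exists.");

                if (errors.Count > 0)
                    throw new ArgumentException(string.Join(" ", errors));

                return _dal.InsertCustomer(customer);
            }
        }

    Putting the same rules into stored procedures instead would move this Create logic into the DB; the trade-off under discussion is exactly where that block of validation should live.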

    Read the article
