Search Results

Search found 6690 results on 268 pages for 'worst practices'.

Page 250/268 | < Previous Page | 246 247 248 249 250 251 252 253 254 255 256 257  | Next Page >

  • How to use Node.js to build pages that are a mix between static and dynamic content?

    - by edt
    All pages on my 5-page site should be served by a Node.js server. Most of the page content is static; at the bottom of each page there is a bit of dynamic content. My Node.js code currently looks like:

        var http = require('http');

        http.createServer(function (request, response) {
          console.log('request starting...');
          response.writeHead(200, { 'Content-Type': 'text/html' });

          var html = '<!DOCTYPE html><html><head><title>My Title</title></head><body>';
          html += 'Some more static content';
          html += 'Some more static content';
          html += 'Some more static content';
          html += 'Some dynamic content';
          html += '</body></html>';

          response.end(html, 'utf-8');
        }).listen(38316);

    I'm sure there are numerous things wrong with this example. Please enlighten me! For example: how can I add static content to the page without building it up in a string with += numerous times? What is the best-practice way to build a small site in Node.js where every page is a mix of static and dynamic content?
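
    One minimal way to separate the static markup from the dynamic part is to keep the static HTML in a file, read it once at startup, and splice the dynamic content in per request. This is only a sketch: the file name template.html and the {{dynamic}} placeholder are assumptions for illustration, not part of the original post.

        // Sketch: serve a cached static template, inserting the dynamic part per request.
        var http = require('http');
        var fs = require('fs');

        // Read the static markup once; 'template.html' is a hypothetical file
        // containing the full page with a {{dynamic}} placeholder near the bottom.
        var template = fs.readFileSync('template.html', 'utf-8');

        http.createServer(function (request, response) {
          var dynamic = 'Generated at ' + new Date().toISOString();
          var html = template.replace('{{dynamic}}', dynamic);
          response.writeHead(200, { 'Content-Type': 'text/html' });
          response.end(html, 'utf-8');
        }).listen(38316);

    For anything bigger than a few pages, a small framework such as Express with a template engine does the same job with less plumbing.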

    Read the article

  • When to save a mongoose model

    - by kentcdodds
    This is an architectural question. I have models like this:

        var foo = new mongoose.Schema({
          name: String,
          bars: [{ type: ObjectId, ref: 'Bar' }]
        });
        var FooModel = mongoose.model('Foo', foo);

        var bar = new mongoose.Schema({
          foobar: String
        });
        var BarModel = mongoose.model('Bar', bar);

    Then I want to implement a convenience method like this:

        BarModel.methods.addFoo = function(foo) {
          foo.bars = foo.bars || []; // Side note: is this something I should check here?
          foo.bars.push(this.id);
          // Here's the line I'm wondering about... Should I include the line below?
          foo.save();
        }

    The biggest con I see with including foo.save() is that I would then have to pass a callback into addFoo to avoid problems with the async operation, which I'm thinking is not preferable. But I also think it would be nice to include it, because addFoo hasn't really "added Foo" until the change has been saved... Am I breaking any design best practices by doing it either way?
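
    A sketch of the callback-taking variant described above (the callback signature is an assumption; note that Mongoose instance methods are normally defined on the schema object, i.e. bar.methods, before mongoose.model() compiles the model):

        // Sketch: persist inside addFoo and let the caller know when the save finishes.
        // Defined on the schema (bar.methods) before mongoose.model('Bar', bar) is called.
        bar.methods.addFoo = function(foo, done) {
          foo.bars = foo.bars || [];
          foo.bars.push(this.id);
          foo.save(function(err, savedFoo) {
            // Surface save errors instead of silently swallowing them.
            done(err, savedFoo);
          });
        };

        // Usage (illustrative):
        // someBar.addFoo(someFoo, function(err) { if (err) { /* handle it */ } });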

    Read the article

  • Unsure how to come up with a good design

    - by Mewzer
    Hello there, I am having trouble coming up with a good design for a group of classes and was hoping someone could give me some guidance on best practices. I have kept the classes and member functions generic to make the problem simpler. Essentially, I have three classes (let's call them A, B, and C) as follows:

        class A
        {
            ...
            int GetX( void ) const { return x; };
            int GetY( void ) const { return y; };
        private:
            B b;    // NOTE: A "has-a" B
            int x;
            int y;
        };

        class B
        {
            ...
            void SetZ( int value ) { z = value; };
        private:
            int z;
            C c;    // NOTE: B "has-a" C
        };

        class C
        {
        private:
            ...
            void DoSomething(int x, int y){ ... };
            void DoSomethingElse( int z ){ ... };
        };

    My problem is as follows: Class A uses its member variables "x" and "y" a lot internally. Class B uses its member variable "z" a lot internally. Class B needs to call C::DoSomething(), but C::DoSomething() needs the values of X and Y in class A passed in as arguments. C::DoSomethingElse() is called from, say, another class (e.g. D), but it needs to invoke SetZ() in class B! As you can see, it is a bit of a mess, as all the classes need information from one another. Are there any design patterns I can use? Any ideas would be much appreciated.

    Read the article

  • java.lang.IllegalStateException: The content of the adapter has changed but ListView.... in spite of calling notifyDataSetChanged()

    - by Mistaken
    What are the best practices to follow when updating the contents of a ListActivity from a background thread (AsyncTask)?

    1) I am calling notifyDataSetChanged() to update the adapter as soon as I manipulate its contents, but my app still force-closes while the user scrolls or clicks on the list. Any pointers to prevent this would be very helpful. Logcat: java.lang.IllegalStateException: The content of the adapter has changed but ListView did not receive a notification.

    2) Where exactly should I update the contents of the ListActivity? Inside doInBackground() or onProgressUpdate()?

    3) I am experiencing regular crashes when the user clicks a list item. So would disabling click events on the ListActivity during the background operation solve the problem? If so, I am not sure how to remove or set item click listeners on the ListActivity dynamically. Please instruct me on that too.

    4) I don't think blocking all UI interactions during the background AsyncTask execution is the only way to solve the problem. I know there is a simple way of doing this, but I need some help. Thanks in advance.

    This is my onCreate...

        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.fa);
            tvStatus = (TextView) findViewById(R.id.tvStatus);
            adapter = new SimpleAdapter(
                this,
                mostPopularList,
                R.layout.list_item,
                new String[] {"title", "author", "views", "date"},
                new int[] {R.id.textView1, R.id.textView2, R.id.textView4, R.id.textView3});
            //populateList();
            setListAdapter(adapter);
        }

    My AsyncTask...

        private class LongOperation extends AsyncTask<String, Void, String> {
            @Override
            protected String doInBackground(String... params) {
                // code for adding new ListActivity items
            }

            @Override
            protected void onPostExecute(String networkStatus) {
                adapter.notifyDataSetChanged();
            }

            @Override
            protected void onPreExecute() {
            }

            @Override
            protected void onProgressUpdate(Void... values) {
            }
        }

    Read the article

  • Security benefits from a second opinion, are there flaws in my plan to hash & salt user passwords vi

    - by Tchalvak
    Here are my goals and plan.

    Overall goals: Security with a certain amount of simplicity and database-to-database transferability, 'cause I'm no expert and could mess it up, and I don't want to have to ask a lot of users to reset their passwords. Easy wiping of the passwords for publishing a "wiped" database of test data (e.g. I'd like to be able to use a PostgreSQL statement to simply reset all passwords to something simple so that testers can use that test data for themselves).

    Plan, hashing the passwords: Account creation records the original email that an account is created with, forever. A global salt is used, e.g. "90fb16b6901dfceb73781ba4d8585f0503ac9391". An account-specific salt, the original email the account was created with, is used, e.g. "[email protected]". The user's password is used, e.g. "password123" (I'll be warning against weak passwords in the signup form). The combination of the global salt, account-specific salt, and password is hashed via some hashing method in PostgreSQL (I haven't been able to find documentation for hashing functions in PostgreSQL, but being able to use SHA-2 or something like that would be nice if I could find it). The hash gets stored in the database.

    Recovering an account: To change their password, users have to go through a standard password reset (and that reset email gets sent to the original email as well as the most recent account email they have set).

    Flaws? Are there any flaws with this that I need to address? And are there best practices for doing the hashing fully within PostgreSQL?
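
    The scheme described above, sketched outside the database just to make the inputs concrete (the constant and function names are illustrative, not from the original post):

        // Sketch of the proposed hash = H(global salt + original email + password),
        // shown with Node's crypto module purely to illustrate the inputs.
        var crypto = require('crypto');

        var GLOBAL_SALT = '90fb16b6901dfceb73781ba4d8585f0503ac9391';

        function hashPassword(originalEmail, password) {
          return crypto
            .createHash('sha256')                           // SHA-2 family, as hoped for above
            .update(GLOBAL_SALT + originalEmail + password) // global salt + account salt + password
            .digest('hex');                                 // 64-character hex string to store
        }

    Inside PostgreSQL itself, the pgcrypto extension's digest() function provides the same SHA-256 primitive (wrapped in encode(..., 'hex') if a hex string is wanted).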

    Read the article

  • subversion: how to manage tweaked files

    - by punk4funk
    Our group is considering moving to SVN, but I can't seem to find a way to do the following: I need to make minor tweaks locally to about 20 files in the repository without having SVN consider them "changed" and include them in the commit (changes like communication time-outs and logging levels). Ideally I would want to merge the tweaked files with newer versions in the repository (keeping the tweaked local files up to date with committed changes from other users). I can't imagine we're unique in wanting/needing this. Are there best practices around this type of use case? One thing I'm considering is putting all the tweaked files into a branched "tweaked" working copy, then merging my tweaked files into my "official" working copy, then using a script, which compares the "tweaked" and "official" working copies, to update my ignore list. The script would also un-ignore and alert me to any files that had tweaks and other changes that, presumably, needed to be committed to the repository. This seems kinda hacky, and I can't imagine there's not a better way.

    Read the article

  • Opening up process and grabbing window title. Where did I go wrong?

    - by user1632018
    In my application I allow users to add a program from an open-file dialog, which adds the item to a ListView and saves the item's location in its Tag. So what I am trying to do is: when the program in the ListView is selected and the button is pressed, it starts a timer, and this timer checks whether the process is running. If it isn't, it launches the process, and once the process is launched it gets the window title of the process and sends it to a textbox on another form. EDIT: The question is whether anyone can see why it is not working; by this I mean starting the process, then when it's started closing the form and adding the process window title to a textbox on another form. I have tried to get it working but I can't. I know that the process name it is getting is right; I think my problem is to do with my For loop. Basically it isn't doing anything visible right now. I feel like I am very close with my code, and I'm hoping it just needs a couple of minor tweaks. Any help would be appreciated. Sorry if my coding practices aren't that great, I'm pretty new to this. EDIT: I thought I found a solution, but it only works now if the process has been started already. It won't start it up.

        Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
            Dim s As String = ListView1.SelectedItems(0).Tag
            Dim myFile As String = Path.GetFileName(s)
            Dim mu As String = myFile.Replace(".exe", "").Trim()
            Dim f As Process
            Dim p As Process() = Process.GetProcessesByName(mu)

            For Each f In p
                If p.Length > 0 Then
                    For i As Integer = 0 To p.Length - 1
                        ProcessID = (p(i).Id)
                        AutoMain.Name.Text = f.MainWindowTitle
                        Timer1.Enabled = False
                        Me.Close()
                    Next
                Else
                    ProcessID = 0
                End If

                If ProcessID = 0 Then
                    Process.Start(myFile)
                End If
            Next
        End Sub

    Read the article

  • cocos2d - how to draw a bottle sprite with dynamically changing water level

    - by Oliver
    I am trying to draw a (2D) sprite in cocos2d showing a bottle. The bottle should be able to have a dynamic water level (i.e. the amount of water in the bottle can change over the lifetime of the sprite), and I am wondering how to do this. I currently have a PNG file of the empty bottle. I adjusted the alpha channel of that PNG so that when rendering the sprite I can draw a blue rectangle and render the bottle texture over it. That gives the impression of the water being inside the bottle. However, the bottle's shape is of course not a rectangle itself, so the water can be seen outside the bounds of the bottle. I can change the bottle image so that only the bottle itself is transparent and set the "outside world" to an opaque color and alpha value, but that in turn prevents the "world background" from being visible in that area. I simply don't have a clue how to realize this in a sane manner. Do I really have to read every pixel of the bottle image, identify which pixels are "inside" the bottle, and then draw the water pixel by pixel? There must be an easier way, right? ;) Any best practices for these kinds of tasks? edit: see the picture below to make it somewhat clearer what I am talking about ;) http://i47.tinypic.com/10rqww0.png

    Read the article

  • Where to start with the development of first database driven Web App (long question)?

    - by Ryan
    Hi all, I've decided to develop a database-driven web app, but I'm not sure where to start. The end goal of the project is three-fold: 1) to learn new technologies and practices, 2) to deliver an unsolicited demo to management showing how information that the company stores as office documents spread across a cumbersome network folder structure can be consolidated and made easier to access and maintain, and 3) to show my co-workers how Test Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches. I think this ends up being a basic CMS, for which I have generated a set of features, see below.

    1) Create a database to store the site structure (organized as a tree with a 'project group'-project structure).
    2) Pull the site structure from the database and display it as a tree using basic front-end technologies.
    3) Add administrator privileges/tools for modifying the site structure.
    4) Auto-create required sub-pages when an admin adds a new project.
    4.1) There will be several sub-pages under each project, and the content for each sub-page is different.
    5) Add user privileges for assigning read and write access to sub-pages.

    What I would like to do is use Test Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read about unit testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and test units? Thank you all in advance for your expertise.

    Read the article

  • Javascript Application Book

    - by Jormundir
    Can anyone recommend a good book on JavaScript module/application development? I'm a software engineer, so I don't need all the intro-to-programming stuff. What I'm really looking for is: how do you bundle the HTML/CSS/JavaScript together so that you can make one include that will load the whole application? I.e.:

        <div id="myapplication"></div>
        ...
        ...
        <script src="myapplication.js">

    Design patterns are always welcome. I've already read JavaScript: The Good Parts and online guides, but it's hard to find a comprehensive guide/tutorial for specifically this. There's a lot of good "this is a JavaScript application" and "this is a scalable framework", but I haven't had any luck with "this is how you build a JavaScript application, including the HTML and CSS, and this is how you deliver it nicely". I'm building a small application to start, so I'm not interested in scalability and large-scale development practices, just a nice and comprehensive guide to get me off the ground.
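
    For the "one include" idea specifically, the usual trick is a script that bootstraps its own markup and styles into the placeholder div. A rough sketch (the asset names and markup are illustrative, not from any particular book):

        // myapplication.js: self-bootstrapping include (sketch; asset names are assumptions)
        (function () {
          // Pull in the stylesheet so the host page only needs the one <script> tag.
          var link = document.createElement('link');
          link.rel = 'stylesheet';
          link.href = 'myapplication.css';
          document.head.appendChild(link);

          // Render the application's markup into the placeholder div.
          var root = document.getElementById('myapplication');
          if (root) {
            root.innerHTML = '<h1>My Application</h1><div class="content">Loading...</div>';
          }
        })();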

    Read the article

  • Handling Email Bouncebacks in Rails

    - by aressidi
    Hi there, I've built a very basic CRM system for my rails app that allows me to send weekly user activity digests with custom text and create multi-part marketing messages which I can configure and send through a basic admin interface. I'm happy with what I've put together on the send-side of things (minus the fact that I haven't tried to volume test its capabilities), but I'm concerned about how to handle bounce-backs. I came across this plugin with associated scripts: http://github.com/kovyrin/bounces-handler I'm using Google Apps to handle my mail and really don't know enough about Perl to want to mess with the above plugin - I have enough headaches. I'm looking for a simple solution for processing bounce-backs in Rails. All my email will go out from an address like this which will be managed in Google Apps: "[email protected]." What's the best workflow for this? Can anyone post an example solution they're using keeping in mind the fact that I'm using Google Apps for the mail? Any guidance, links, or basic workflow best-practices to handle this would be greatly appreciated. Thanks! -A

    Read the article

  • Question about architecting an ASP.NET MVC application?

    - by Misnomer
    I have read a little bit about architecture and also about patterns in order to follow best practices. This is the architecture we have, and I wanted to know what you think of it and any proposed changes or improvements to it:

    Presentation Layer - Contains all the views, controllers, and any helper classes that the views require; it also contains references to the Model Layer and Business Layer.
    Business Project - Contains all the business logic and validation, and the security helper classes it uses. It contains references to the Data Access Layer and Model Layer.
    Data Access Layer - Contains the actual queries (CRUD operations) made on the entity classes. It contains a reference to the Model Layer.
    Model Layer - Contains the Entity Framework model, DTOs, and enums. Does not really have a reference to any of the above layers.

    What are your thoughts on the above architecture? The problem is that I am getting confused by reading about, say, the repository pattern, domain-driven design, and other design patterns. The architecture we have, although not that strict, is still relatively alright I think, and does not really muddle things, but I may be wrong. I would appreciate any help or suggestions here. Thanks!

    Read the article

  • What is the best way to embed SQL in VB.NET?

    - by Amy P
    I am looking for information on best practices or project layout for software that uses SQL embedded inside VB.NET or C#. The software will connect to a full SQL database. The software was written in VB6 and ported to VB.NET; we want to update it to use .NET functionality, but I am not sure where to start with my research. We are using Visual Studio 2005. All database manipulations are done from VB. Update: To clarify, we are currently using SqlConnection, SqlDataAdapter, and SqlDataReader to connect to the database. What I mean by "embed" is that the SQL stored procedures are scripted inside our VB code and then run on the database. All of our tables, stored procs, views, etc. are manipulated in the VB code. The layout of our code is quite messy. I am looking for a better architecture or pattern that we can use to organize our code. Can you recommend any books, web pages, or topics that I can google to help me better understand the best way for us to do this?

    Read the article

  • Rails exit controller after rendering

    - by codysehl
    I have an action in my controller that I am having trouble with. This is my first Rails app, so I'm not sure of the best practices surrounding Rails. I have a model called Group and a few actions in its controller. I have written a test that should cause the controller to render an error in JSON because of an invalid Group ID. Instead of rendering and exiting, it looks like the controller is rendering and continuing to execute.

    Test:

        test 'should not remove group because of invalid group id' do
          post(:remove, {'group_id' => '3333'})
          response = JSON.parse(@response.body)
          assert_response :success
          assert_equal 'Success', response['message']
        end

    Controller action:

        # Post remove
        # group_id
        def remove
          if((@group = Group.find_by_id(params[:group_id])) == nil)
            render :json => { :message => "group_id not found" }
          end

          @group.destroy

          if(!Group.exists?(@group))
            render :json => { :message => "Success" }
          else
            render :json => { :errors => @group.errors.full_messages }
          end
        end

    In the controller, the first if statement executes:

        render :json => { :message => "group_id not found" }

    but @group.destroy is still being executed. This seems counter-intuitive to me; I would think that the render method should exit the controller. Why is the controller not exiting after render is called? The purpose of this block of code is to recover gracefully when no record can be found with the passed-in ID. Is this the correct way of doing something like this?

    Read the article

  • OOP + MVC advice on Member Controller

    - by dan727
    Hi, I am trying to follow good practices as much as possible while I'm learning to use OOP in an MVC structure, so I'm turning to you guys for a bit of advice on something that is bothering me a little. I am writing a site where I will have a number of different forms for members to fill in (mainly data about themselves), so I've decided to set up a Member controller where all of the forms relating to the member are represented as individual methods. This includes login/logout methods, as well as editing profile data etc. In addition to these methods, I also have a method to generate the member's control panel widget, which is a constant on every page of the site while the member is logged in. The only thing is, all of the other methods in this controller have the same dependencies and form templates, so it would be great to generate all this in the constructor; but as the control_panel method does not have the same dependencies, I cannot use the constructor for this purpose, and instead I have to redeclare the dependencies and the same template snippets in each method. This obviously isn't ideal and doesn't follow the DRY principle, but I'm wondering what I should do with the control_panel method, as it is related to the member and that's why I put it in that controller in the first place. Am I just over-complicating things here, and does it make sense to just move the control_panel method into a simple helper class? Here are the basic methods of the controller:

        class Member_Controller extends Website_Controller {

            public function __construct()
            {
                parent::__construct();

                if (request::is_ajax()) {
                    $this->auto_render = FALSE; // disable auto render
                }
            }

            public static function control_panel()
            {
                // load control panel view
                $panel = new View('user/control_panel');
                return $panel;
            }

            public function login() { }
            public function register() { }
            public function profile() { }
            public function household() { }
            public function edit_profile() { }
            public function logout() { }
        }

    Read the article

  • Is there any well-known paradigm for iterating enum values?

    - by SadSido
    I have some C++ code in which the following enum is declared:

        enum Some
        {
            Some_Alpha = 0,
            Some_Beta,
            Some_Gamma,
            Some_Total
        };

        int array[Some_Total];

    The values of Alpha, Beta and Gamma are sequential, and I gladly use the following loop to iterate through them:

        for ( int someNo = (int)Some_Alpha; someNo < (int)Some_Total; ++someNo ) {}

    This loop is fine until I decide to change the order of the declarations in the enum, say, making Beta the first value and Alpha the second one. That invalidates the loop header, because now I have to iterate from Beta to Total. So, what are the best practices for iterating through an enum? I want to iterate through all the values without changing the loop headers every time. I can think of one solution:

        enum Some
        {
            Some_Start = -1,
            Some_Alpha,
            ...
            Some_Total
        };

        int array[Some_Total];

    and iterate from (Start + 1) to Total, but it seems ugly and I have never seen anyone do it in code. Is there any well-known paradigm for iterating through an enum, or do I just have to fix the order of the enum values? (Let's pretend I really have some awesome reasons for changing the order of the enum values...)

    Read the article

  • How to conduct an interview for a development position remotely?

    - by sharptooth
    Usually we run interviews in the office. We have a room with a table; the interviewee and one or two interviewers sit at the table, the interviewers ask questions, often accompanied by code snippets on paper, and the interviewee (hopefully) answers them and writes code snippets to illustrate his point. Usually it's something like an interviewer writing about five lines of C++ code and asking some specific question - not much code at all. Now we need to do the same remotely. We will be in our office and the interviewee will be far away - we are asked to help hire a person for another office located abroad. Of course we can use some technology for voice calls, but I'm afraid that's the most we can count on. I see a whole set of obstacles here: how to write illustrative code snippets and exchange them efficiently? What to do to compensate for the fact that we're not native English speakers and the interviewees might or might not be native English speakers (I'm afraid this can make conversation significantly harder)? Are there any best practices for this situation? How could we address the obstacles listed? What other things should we consider to run the interview most efficiently?

    Read the article

  • Writing all the html of a document with jquery instead of in the page body?

    - by Robert
    I'm a UI person currently working on a web application, where most of the people I work with are back-end developers. I'm currently in disagreement with them about whether or not the above is a prudent thing to do. This application does use quite a bit of JavaScript, and unfortunately wouldn't even work without it. This being the case, one of the back-end developers I'm working with is claiming that pages could and even SHOULD be built completely with JavaScript or jQuery. This caught me completely off guard. We're talking about div tags, lists, background images and text here. I'm trying to explain to him that this isn't the right way to do things at all, and that from a best-practices perspective, content (HTML) should be separate from presentation (CSS) and behavior (script etc.). I know that it's possible to write HTML in jQuery, although I haven't done it, but am I wrong in thinking that this isn't the way things should be done? Is it even possible to write ALL the code with jQuery? I would love to hear any thoughts either way, as I will be discussing this with him again tomorrow.
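
    For what that separation looks like in practice, a tiny sketch (the markup and class it assumes are hypothetical): the content lives in the page, the look lives in a CSS rule, and jQuery only layers behavior on top.

        // Sketch: behavior layered onto existing markup rather than generating the page in script.
        // Assumes the page already contains <ul id="menu"><li>Home</li><li>About</li></ul>
        // and a .selected rule in the stylesheet.
        $(function () {
          $('#menu li').on('click', function () {
            $(this).toggleClass('selected'); // presentation stays in CSS
          });
        });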

    Read the article

  • Best method to cache objects in PHP?

    - by Martin Bean
    Hi, I'm currently developing a large site that handles user registrations - a social networking website, for argument's sake. However, I've noticed a lag in page loads and determined that it is the creation of objects on pages that's slowing things down. For example, I have a Member object that, when instantiated with an ID passed as a constructor parameter, queries the database for that member's row in the members database table. Not bad, but this happens each time a page is loaded, and more than once when, say, fetching an array of that particular member's friends, as a new Member object is created for each friend. So on a single page I can have upwards of seven of the same object, but containing different properties. What I want to do is find a way to reduce the database load and to persist objects between page loads. For example, the logged-in user's object could be created on login (which I can do) but then stored somewhere for retrieval, so I don't have to keep re-creating the object between page loads. What is the best solution for this? I've had a look at Memcache, but with it being a third-party module I can't have the web host install it on this occasion. What are my alternatives, and/or best practices in my case? Thanks in advance.

    Read the article

  • Proper way to communicate between divs in jquery?

    - by folder
    This is probably a simple question, and I'm just being dense. I've looked through a few jQuery books and nothing has jumped out at me; I'm probably missing something. I'm looking for the 'proper', best-practices way to communicate between divs/DOM items on a page. For example, I have a page with 5 panels that display when a link is chosen; they hide/show/run some code that changes other pieces of the page. Something like this snippet:

        <ul>
          <li><div id="unique_name_1_anchor">Unique div 1</div></li>
          <li><div id="unique_name_2_anchor">Unique div 2</div></li>
          <li><div id="unique_name_3_anchor">Unique div 3</div></li>
          <li><div id="unique_name_4_anchor">Unique div 4</div></li>
        </ul>

        ...Somewhere else on the page

        <div id="unique_name_1_panel">Some panel 1 stuff here</div>
        <div id="unique_name_2_panel">Some panel2 stuff here</div>
        <div id="unique_name_3_panel">Some panel3 here</div>
        <div id="unique_name_4_panel">Some panel4 here</div>

    The concept being that when a user clicks on a unique_name_X_anchor div, some action is performed on the corresponding panel (i.e. show/hide etc...). What I have been doing now is parsing the id, i.e. this.id.replace("_anchor", "_panel"), to get the id of the other DOM element. This just seems clunky, and there must be a better/more proper way of doing this. Suggestions? Thanks
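
    One common alternative to parsing ids is to make the anchor-to-panel link explicit with a data attribute; a sketch (the data-panel attribute is an assumption added to the markup above, not part of the original post):

        // Sketch: each anchor declares which panel it controls, e.g.
        // <div id="unique_name_1_anchor" data-panel="#unique_name_1_panel">Unique div 1</div>
        $('[data-panel]').on('click', function () {
          $('[id$="_panel"]').hide();        // hide all panels
          $($(this).data('panel')).show();   // show the one this anchor points at
        });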

    Read the article

  • What is the best practice when using UIStoryboards?

    - by Scott Sherwood
    Having used storyboards for a while now, I have found them extremely useful; however, they do have some limitations, or at least unnatural ways of doing things. While it seems like a single storyboard should be used for your app, when you get to even a moderately sized application this presents several problems. Working within teams is made more difficult, as conflicts in storyboards can be problematic to resolve (any tips with this would also be welcome). The storyboard itself can become quite cluttered and unmanageable. So my question is: what are the best practices of use? I have considered using a hybrid approach, with logical tasks split into separate storyboards, but this results in the UX flow being split between the code and the storyboard. To me this feels like the best way to create reusable actions such as login actions etc. Also, should I still consider a place for xibs? This article has quite a good overview of many of the issues, and it proposes that xibs should be used for scenes that only have one screen. Again this feels unusual to me; with Apple's support for instantiating unconnected scenes from a storyboard, it would suggest that xibs won't have a place in the future, but I could be wrong.

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes. So here are my basic observations: If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time. Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router. Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again. The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless. All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT. What I've tried: Making sure each computer is being assigned a distinct IP. (They are.) Disabling UPnP and Stateful Packet Inspection on the router. Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine. Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled). What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. 
The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be? Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin, so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS. We have two offices: location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office.

    Again, I am not knowledgeable in the ways of Windows, but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles?

    Ideally I would prefer to have only one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to its respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers.

    It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking about larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller(s). I assume you can lose two domain controllers just as easily as one.

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another, more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. The difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this:

    SMART error logs show no errors. (Normally when a disk starts acting up, SMART may pass, but it still records a read error in the error log.)
    A SMART self-test (smartctl -t long /dev/sda) completes without errors.
    The reallocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed.
    dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors.

    What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again; this is a disk forensics question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are the logs and such. This is the smartctl -a output:

        smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Barracuda 7200.7 and 7200.7 Plus family
        Device Model:     ST3120026A
        Serial Number:    5JT1CLQM
        Firmware Version: 3.06
        User Capacity:    120,034,123,776 bytes
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   6
        ATA Standard is:  ATA/ATAPI-6 T13 1410D revision 2
        Local Time is:    Mon Jul 1 21:18:33 2013 CEST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 24)  The self-test routine was aborted by the host.
        Total time to complete Offline data collection: ( 430) seconds.
        Offline data collection capabilities: (0x5b) SMART execute Offline immediate.
                                                     Auto Offline data collection on/off support.
                                                     Suspend Offline collection upon new command.
                                                     Offline surface scan supported.
                                                     Self-test supported.
                                                     No Conveyance Self-test supported.
                                                     Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                No General Purpose Logging support.
        Short self-test routine recommended polling time:    ( 1) minutes.
        Extended self-test routine recommended polling time: ( 85) minutes.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   050   046   006    Pre-fail  Always       -       47766662
          3 Spin_Up_Time            0x0003   097   096   000    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       10
          5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       31
          7 Seek_Error_Rate         0x000f   084   060   030    Pre-fail  Always       -       820305
          9 Power_On_Hours          0x0032   048   048   000    Old_age   Always       -       46373
         10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       605
        194 Temperature_Celsius     0x0022   036   065   000    Old_age   Always       -       36
        195 Hardware_ECC_Recovered  0x001a   050   046   000    Old_age   Always       -       47766662
        197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x003e   200   196   000    Old_age   Always       -       6
        200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
        202 Data_Address_Mark_Errs  0x0032   100   253   000    Old_age   Always       -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Extended offline    Aborted by host               80%      46361        -
        # 2  Extended offline    Completed without error       00%      46358        -
        # 3  Short offline       Completed without error       00%      12046        -
        # 4  Extended offline    Completed without error       00%      10472        -
        # 5  Short offline       Completed without error       00%      10471        -
        # 6  Short offline       Completed without error       00%      10471        -
        # 7  Short offline       Completed without error       00%      6770         -
        # 8  Extended offline    Aborted by host               90%      5958         -
        # 9  Extended offline    Aborted by host               90%      5951         -
        #10  Short offline       Completed without error       00%      5024         -
        #11  Extended offline    Aborted by host               80%      5024         -
        #12  Short offline       Completed without error       00%      3697         -
        #13  Short offline       Completed without error       00%      237          -
        #14  Short offline       Completed without error       00%      145          -
        #15  Short offline       Completed without error       00%      69           -
        #16  Extended offline    Completed without error       00%      68           -
        #17  Short offline       Completed without error       00%      66           -
        #18  Short offline       Completed without error       00%      49           -
        #19  Short offline       Completed without error       00%      29           -
        #20  Short offline       Completed without error       00%      29           -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors):

        [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code
        [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
        [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00
        [1755091.211166] end_request: I/O error, dev sda, sector 150908216

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends:

        Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test.
        Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test.

    The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb:

        smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Barracuda 7200.8 family
        Device Model:     ST3250823AS
        Serial Number:    3ND1GNBC
        Firmware Version: 3.03
        User Capacity:    250,059,350,016 bytes
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   7
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Sun Apr 25 13:15:34 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without error or
                                                no self-test has ever been run.
        Total time to complete Offline data collection: ( 430) seconds.
        Offline data collection capabilities: (0x5b) SMART execute Offline immediate.
                                                     Auto Offline data collection on/off support.
                                                     Suspend Offline collection upon new command.
                                                     Offline surface scan supported.
                                                     Self-test supported.
                                                     No Conveyance Self-test supported.
                                                     Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine recommended polling time:    ( 1) minutes.
        Extended self-test routine recommended polling time: ( 84) minutes.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   047   039   006    Pre-fail  Always       -       168450357
          3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       33
          5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       9
          7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       654745480
          9 Power_On_Hours          0x0032   055   055   000    Old_age   Always       -       40141
         10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       51
        194 Temperature_Celsius     0x0022   037   062   000    Old_age   Always       -       37 (0 17 0 0)
        195 Hardware_ECC_Recovered  0x001a   047   039   000    Old_age   Always       -       168450357
        197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
        200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
        202 TA_Increase_Count       0x0032   100   253   000    Old_age   Always       -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Short offline       Completed without error       00%      40131        -
        # 2  Extended offline    Completed: read failure      30%      40129        379795511
        # 3  Short offline       Completed without error       00%      40084        -
        # 4  Short offline       Completed without error       00%      40060        -
        # 5  Short offline       Completed without error       00%      40036        -
        # 6  Short offline       Completed without error       00%      40013        -
        # 7  Short offline       Completed without error       00%      39990        -
        # 8  Extended offline    Completed without error       00%      39977        -
        # 9  Short offline       Completed without error       00%      39919        -
        #10  Short offline       Completed without error       00%      39895        -
        #11  Short offline       Completed without error       00%      39872        -
        #12  Short offline       Completed without error       00%      39848        -
        #13  Short offline       Completed without error       00%      39824        -
        #14  Short offline       Completed without error       00%      39801        -
        #15  Extended offline    Completed without error       00%      39789        -
        #16  Short offline       Completed without error       00%      39754        -
        #17  Short offline       Completed without error       00%      39732        -
        #18  Short offline       Completed without error       00%      39707        -
        #19  Short offline       Completed without error       00%      39683        -
        #20  Short offline       Completed without error       00%      39660        -
        #21  Short offline       Completed without error       00%      39636        -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?

    Read the article
