Search Results

Search found 16547 results on 662 pages for 'physical design'.

Page 86/662 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.) In any case, I am at the point where I am looking to refactor code currently sitting in if-else blocks into separate objects. This is because another value combination creates a new set of logical sub-branches. To be more specific, this is a trading system feature, where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field plus whether it is a buy or a sell requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the 2 fields seems like a good choice at this point. The way I would normally do this is to create some Factory object which, when called with the 2 relevant parameters (indicator, buysell), would return the correct subclass of the object. Sometimes I do this pattern with a map, which allows looking up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class. So - for some reason this doesn't feel like good design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory method approach makes sense, can anyone suggest a nicer design?
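
    A minimal sketch of the map-based factory variant described above, in Java; Side, OrderResponse and the handler names are illustrative, not taken from the question's codebase:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Supplier;

        enum Side { BUY, SELL }

        class OrderResponse { int indicator; Side side; /* ... */ }

        interface ResponseHandler {
            void handle(OrderResponse response);
        }

        class HandlerFactory {
            // One entry per (indicator, side) combination; the factory never needs
            // to know the concrete classes beyond what gets registered here.
            private final Map<String, Supplier<ResponseHandler>> registry = new HashMap<>();

            void register(int indicator, Side side, Supplier<ResponseHandler> supplier) {
                registry.put(indicator + ":" + side, supplier);
            }

            ResponseHandler create(int indicator, Side side) {
                Supplier<ResponseHandler> supplier = registry.get(indicator + ":" + side);
                if (supplier == null) {
                    throw new IllegalArgumentException("No handler for " + indicator + "/" + side);
                }
                return supplier.get();
            }
        }

        // Wiring happens in one place at startup, e.g.:
        // factory.register(1, Side.BUY, BuyFillHandler::new);   // BuyFillHandler is hypothetical

    Registering suppliers (or Class objects) keeps the construction knowledge in one wiring spot rather than spread through if-else branches, and new combinations become one registration line plus one new handler class.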

    Read the article

  • Using visitor pattern with large object hierarchy

    - by T. Fabre
    Context I've been using a "pseudo" visitor pattern (pseudo, as in it does not use double dispatch) with a hierarchy of objects (an expression tree): public interface MyInterface { void Accept(SomeClass operationClass); } public class MyImpl : MyInterface { public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ... } } This design, however questionable, was pretty comfortable, since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e. operator nodes that contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on. But the time has come when I need to add a new operation, such as pretty printing: public class MyImpl : MyInterface { // Property does not come from MyInterface public string SomeProperty { get; set; } public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ... } public void Accept(SomePrettyPrinter printer) { printer.PrettyPrint(this.SomeProperty); } } I basically see two options: Keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO). Use the "true" Visitor pattern, at the expense of extensibility (not an option, as I expect to have more implementations coming along the way...), with about 50+ overloads of the Visit method, each one matching a specific implementation. Question Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?
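
    A minimal sketch of the "true" double-dispatch visitor with a default fallback, which keeps one class per operation without forcing every visitor to implement 50+ methods; it is written in Java purely for illustration (the question's code is C#) and all type names are made up:

        interface Expr {
            <R> R accept(ExprVisitor<R> v);
        }

        interface ExprVisitor<R> {
            // One method per node type you care about; everything else falls back.
            default R visit(Constant c) { return visitDefault(c); }
            default R visit(Addition a) { return visitDefault(a); }
            R visitDefault(Expr e);
        }

        final class Constant implements Expr {
            final double value;
            Constant(double value) { this.value = value; }
            public <R> R accept(ExprVisitor<R> v) { return v.visit(this); }
        }

        final class Addition implements Expr {
            final Expr left, right;
            Addition(Expr left, Expr right) { this.left = left; this.right = right; }
            public <R> R accept(ExprVisitor<R> v) { return v.visit(this); }
        }

        // A pretty-printer only overrides what it needs.
        final class PrettyPrinter implements ExprVisitor<String> {
            public String visit(Constant c) { return Double.toString(c.value); }
            public String visit(Addition a) {
                return "(" + a.left.accept(this) + " + " + a.right.accept(this) + ")";
            }
            public String visitDefault(Expr e) { return e.getClass().getSimpleName(); }
        }

    New node types then only need an accept method (and, optionally, a new default visit overload), while existing visitors keep working through visitDefault.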

    Read the article

  • ASP.NET Webforms developers and web designers: how to interact?

    - by just_name
    I'm an ASP.NET Webforms developer, and I face some problems when I deal with designers. Designers always complain about the ASP.NET server controls. They'd rather just have an HTML file and create CSS files along with the required images to go with those. Sometimes, if the design phase is done in advance, I get HTML files with related CSS files, but then we face many problems integrating the design with the .aspx files (server controls and Telerik controls, etc.). What I want to ask is: How do I overcome these problems? The designers prefer PHP and MVC developers because of the problems with .NET server controls. I need to know how to interact with the designers in the right way. Are there any tools or applications to provide the designers with the rendered (HTML) version of the .aspx pages? By that I mean the page at runtime rather than the .aspx in Visual Studio. They do use Expression Web, but they want the rendered page as HTML as well.
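
    One low-tech way to hand designers the runtime markup is to fetch the rendered page from a running development server and save the HTML it returns; a sketch, assuming a hypothetical local URL (Java used here purely for illustration):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.file.Path;

        public class SaveRenderedPage {
            public static void main(String[] args) throws Exception {
                // Hypothetical dev-server URL; point it at the running .aspx page.
                URI page = URI.create("http://localhost:8080/Orders/Edit.aspx");
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder(page).GET().build();
                // Saves the HTML the browser would actually receive,
                // with the server controls already expanded into plain markup.
                client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("rendered-edit.html")));
            }
        }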

    Read the article

  • Modular Architecture for Processing Pipeline

    - by anjruu
    I am trying to design the architecture of a system that I will be implementing in C++, and I was wondering if people could think of a good approach, or critique the approach that I have designed so far. First of all, the general problem is an image processing pipeline. It contains several stages, and the goal is to design a highly modular solution, so that any of the stages can be easily swapped out and replaced with a piece of custom code (so that the user can have a speed increase if s/he knows that a certain stage is constrained in a certain way in his or her problem). The current thinking is something like this: struct output; /*Contains the output values from the pipeline.*/ class input_routines{ public: virtual foo stage1(...){...} virtual bar stage2(...){...} virtual qux stage3(...){...} ... } output pipeline(input_routines stages); This would allow people to subclass input_routines and override whichever stage they wanted. That said, I've worked in systems like this before, and I find the subclassing and the default stuff tends to get messy, and can be difficult to use, so I'm not giddy about writing one myself. I was also thinking about a more STLish approach, where the different stages (there are 6 or 7) would be defaulted template parameters. Can anyone offer a critique of the pattern above, thoughts on the template approach, or any other architecture that comes to mind?
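
    A sketch of treating each stage as a swappable object rather than an overridable method, using Java generics where the question uses C++ templates; the stage and image type names are illustrative:

        import java.util.function.Function;

        // Each stage is just a function from its input type to its output type.
        interface Stage<I, O> extends Function<I, O> {}

        final class Pipeline<I, O> {
            private final Function<I, O> chain;

            private Pipeline(Function<I, O> chain) { this.chain = chain; }

            static <I> Pipeline<I, I> start() { return new Pipeline<>(Function.identity()); }

            // Append a stage; a caller can pass any custom implementation here.
            <R> Pipeline<I, R> then(Stage<O, R> stage) { return new Pipeline<>(chain.andThen(stage)); }

            O run(I input) { return chain.apply(input); }
        }

        // Usage sketch with hypothetical image types:
        // Pipeline<RawImage, Features> p = Pipeline.<RawImage>start()
        //         .then(denoise)           // Stage<RawImage, RawImage>
        //         .then(segment)           // Stage<RawImage, Mask>
        //         .then(extractFeatures);  // Stage<Mask, Features>

    Swapping a stage is then just passing a different object to then(), and each stage stays independently testable without a subclassing hierarchy.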

    Read the article

  • Should Android and iPhone UI be different?

    - by Phonon
    I'm not completely new to developing apps, but I'm at a point where I'm trying to develop something and deploy it on several mobile platforms. To concentrate on only the two major ones, suppose I'm developing an app for Android and iPhone and designing the UI and the general user interaction architecture. Both platforms give guidelines as to how their UIs should work. For example, most iPhone apps have the Navigation Bar (the one that says Testing 1 and has a Back button) and an Icon Bar for navigating the program, while Android uses an Options Menu fetched via a Menu button, and "back" navigation is handled with the physical Back button on the device. I've seen many apps that try to force the same UI on every platform, for example custom-building an iPhone-style Icon Bar and putting it in their Android apps, but it just doesn't quite look right to me and it feels like it violates the UI design guidelines somewhat. Are there any good design patterns for implementing something sufficiently similar on both platforms, yet still platform-specific enough that the user would not feel out of their comfort zone? What do people usually do in these situations?
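
    There is no way around writing two native UI layers, but the code behind them can stay shared; a rough sketch of that split, with made-up names (Java here only for illustration):

        // Shared code talks to a platform-neutral navigation abstraction...
        interface Navigator {
            void goBack();
            void showScreen(String screenId);
        }

        // ...and each platform module supplies the behavior that feels native there.
        final class AndroidNavigator implements Navigator {
            public void goBack() { /* let the hardware Back button / back stack handle it */ }
            public void showScreen(String screenId) { /* start an Activity or Fragment */ }
        }

        final class IosNavigator implements Navigator {
            public void goBack() { /* pop the navigation-controller stack */ }
            public void showScreen(String screenId) { /* push a view controller */ }
        }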

    Read the article

  • Object design problem for a simple school application

    - by Aragornx
    I want to create a simple school application that provides grades, notes, presence, etc. for students, teachers and parents. I'm trying to design objects for this problem and I'm a little bit confused, because I'm not very experienced in class design. Some of my present objects are: class PersonalData { private String name; private String surname; private Calendar dateOfBirth; [...] } class Person { private PersonalData personalData; } class User extends Person { private String login; private char[] password; } class Student extends Person { private ArrayList<Counselor> counselors = new ArrayList<>(); } class Counselor extends Person { private ArrayList<Student> children = new ArrayList<>(); } class Teacher extends Person { private ArrayList<SchoolClass> schoolClasses = new ArrayList<>(); private ArrayList<Subject> subjects = new ArrayList<>(); } This is of course a general idea, but I'm sure it's not the best way. For example, I want one person to be able to be a Teacher and also a Parent (Counselor), and the present approach forces me to have two Person objects. I want the user, after successfully logging in, to get all the roles it has (Student, or Teacher, or (Teacher & Parent)). I think I should make and use some interfaces, but I'm not sure how to do this right. Maybe like this: interface Role { } interface TeacherRole extends Role { void addGrade( Student student, Grade grade, [...] ); } class Teacher implements TeacherRole { private Person person; [...] } class User extends Person { ArrayList<Role> roles = new ArrayList<>(); } Please, could anyone help me to get this right, or maybe just point me to some literature/articles that cover practical object design.
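
    A sketch of the composition direction hinted at above: one Person, many Role objects, so the same person can be a Teacher and a Parent at once. Grade, PersonalData and the role names are simplified stand-ins:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Optional;

        class PersonalData { /* name, surname, dateOfBirth, ... as in the question */ }
        class Grade { /* value, subject, ... */ }

        abstract class Role { }

        class StudentRole extends Role {
            final List<Grade> grades = new ArrayList<>();
        }

        class TeacherRole extends Role {
            void addGrade(StudentRole student, Grade grade) { student.grades.add(grade); }
        }

        class Person {
            private final PersonalData personalData;
            private final List<Role> roles = new ArrayList<>();

            Person(PersonalData personalData) { this.personalData = personalData; }

            void addRole(Role role) { roles.add(role); }

            // After login, the UI can ask which roles this person has and act on all of them.
            <T extends Role> Optional<T> getRole(Class<T> type) {
                return roles.stream().filter(type::isInstance).map(type::cast).findFirst();
            }
        }

    Logging in then returns a single Person whose role list might contain both a TeacherRole and a Counselor/Parent role, instead of two separate Person objects.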

    Read the article

  • How to correctly write an installation or setup document

    - by UmNyobe
    I just joined a small start-up as a software engineer after graduation. The start-up is 4 years old, and I am working with the CEO and the COO, even if there are some people abroad. Basically, the two of them used to do almost everything. I am currently in some kind of training phase. I have at my disposal internal architecture, setup and installation documentation. The architecture documentation is like a bible and should contain complete information; the rest are used to give directions in different processes. The issue is that these documents are more or less dated, as they just didn't have the time to update them. I will be in charge of training the next hires, and updating these documents is part of my training. In some there is a lot of hard-coded information, like: Install this_module_which_still_exists cd this_dir_name_changed cp this_file_name_changed other_dir_name_changed ./config_script.sh ./execute_script.sh The issues I have faced: either the module installation is completely different (for instance, there is now an RPM, or a different OS), or names have changed and I need to replace old names with new ones; the description of the purpose of the current step is missing; information about a whole topic is missing. Fortunately these guys are around and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so that in the future people don't feel like they are completely rewriting a document each time they update it. Do you have suggestions? If there is a lightweight documentation methodology available online that you can point me to, that would be nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code, so I don't know why internal documents deserve different treatment.

    Read the article

  • How should I structure the implementation of turn-based board game rules?

    - by Setzer22
    I'm trying to create a turn-based strategy game on a tilemap. I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about: the "game rules" logic. That is, the code that displays the menu, allows the player to select units and command them, then tells the unit game objects what to do given the player input. The best way I could think of to handle this was a big state machine, so everything that can be done in a "turn" is handled by this state machine, and its update code does different things depending on the state. However, this approach leads to a large amount of code (anything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough. I'd like to know of better systems for handling this, so that I can extend the game with new rules without having a monstrous if/else chain (or switch/case, for that matter). Any ideas? What specific design pattern other than MVC should I be using?
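
    A minimal sketch of splitting the turn logic into one object per state, so adding a rule means adding a state class rather than another branch; PlayerInput, GameModel and Unit are hypothetical stand-ins:

        // Hypothetical stand-ins so the sketch is self-contained.
        record PlayerInput(int tile) {}
        class Unit {}
        class GameModel {
            Unit unitAt(int tile) { return null; }
            boolean canMove(Unit unit, int tile) { return false; }
            void move(Unit unit, int tile) {}
        }

        // Each turn phase is its own object instead of a branch in one big update().
        interface TurnState {
            TurnState handle(PlayerInput input, GameModel model); // returns the next state
        }

        class SelectUnitState implements TurnState {
            public TurnState handle(PlayerInput input, GameModel model) {
                Unit unit = model.unitAt(input.tile());
                return unit != null ? new CommandUnitState(unit) : this;
            }
        }

        class CommandUnitState implements TurnState {
            private final Unit selected;
            CommandUnitState(Unit selected) { this.selected = selected; }

            public TurnState handle(PlayerInput input, GameModel model) {
                if (model.canMove(selected, input.tile())) {
                    model.move(selected, input.tile());
                    return new SelectUnitState();
                }
                return this; // invalid target: stay in this state
            }
        }

        class TurnController {
            private TurnState state = new SelectUnitState();
            void onInput(PlayerInput input, GameModel model) { state = state.handle(input, model); }
        }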

    Read the article

  • Help with Abstract Factory Pattern

    - by brazc0re
    I need help with an abstract factory pattern design. This question is a continuation of: Design help with parallel process. I am really confused about where I should be initializing all of the settings for each type of medium (e.g. RS232, TCP/IP, etc.). Attached is the drawing of how I am setting up the pattern: As shown, when a medium is created, each medium implements an ICreateMedium interface. I would assume that the Create() method also creates the proper object, such as SerialPort serialPort = new SerialPort("COM1", baud); however, TCPIPMedium would have an issue with the interface because it wouldn't need to initialize a serial port object. I know I am doing something majorly wrong here; I just can't figure it out and have been stuck for a while. What I'm also confused about is how the IMedium interface will get access to the communication object once it is created, so it can write out the appropriate byte[] packet. Any guidance would be greatly appreciated. My main goal is to have the Communicator class spit a packet out without caring which type of medium is active.
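
    A rough sketch of keeping medium-specific settings inside each concrete factory so that neither the shared interfaces nor the Communicator ever see a serial port or a socket; written in Java purely for illustration (the question looks like C#) and every name is made up:

        // All the Communicator needs from any medium.
        interface Medium {
            void write(byte[] packet) throws Exception;
        }

        interface MediumFactory {
            Medium create();
        }

        class SerialMedium implements Medium {
            SerialMedium(String port, int baud) { /* wrap a serial-port library here */ }
            public void write(byte[] packet) { /* send over the serial port */ }
        }

        class TcpMedium implements Medium {
            TcpMedium(String host, int port) { /* open a Socket here */ }
            public void write(byte[] packet) { /* send over the socket */ }
        }

        // Each factory owns only the settings its medium cares about.
        class SerialMediumFactory implements MediumFactory {
            private final String port; private final int baud;
            SerialMediumFactory(String port, int baud) { this.port = port; this.baud = baud; }
            public Medium create() { return new SerialMedium(port, baud); }
        }

        class TcpMediumFactory implements MediumFactory {
            private final String host; private final int tcpPort;
            TcpMediumFactory(String host, int tcpPort) { this.host = host; this.tcpPort = tcpPort; }
            public Medium create() { return new TcpMedium(host, tcpPort); }
        }

        // The Communicator just asks whichever factory it was given for "a" medium.
        class Communicator {
            private final Medium medium;
            Communicator(MediumFactory factory) { this.medium = factory.create(); }
            void send(byte[] packet) throws Exception { medium.write(packet); }
        }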

    Read the article

  • .net developers and web designers: how to interact?

    - by just_name
    I'm an ASP.NET developer, and I face some problems when I deal with designers. The designers always complain about the ASP.NET server controls. They'd rather just have an HTML file and create CSS files along with the required images to go with those. Sometimes, if the design phase is done in advance, I get HTML files with related CSS files, but then we face many problems integrating the design with the .aspx files (server controls and Telerik controls, etc.). What I want to ask is: How do I overcome these problems? The designers prefer PHP and MVC developers because of the problems with .NET server controls. I need to know how to interact with the designers in the right way. Are there any tools or applications to provide the designers with the rendered (HTML) version of the .aspx pages? By that I mean the page at runtime rather than the .aspx in Visual Studio. They do use Expression Web, but they want the rendered page as HTML as well.

    Read the article

  • lshw tells me my processor is 64-bit but my motherboard has a 32-bit width

    - by bpetit
    Recently I noticed lshw tells me a strange thing. Here is the first part of my lshw output:

        bpetit-1025c
            description: Notebook
            product: 1025C (1025C)
            vendor: ASUSTeK COMPUTER INC.
            version: x.x
            serial: C3OAAS000774
            width: 32 bits
            capabilities: smbios-2.7 dmi-2.7 smp-1.4 smp
            configuration: boot=normal chassis=notebook cpus=2 family=Eee PC...
          *-core
              description: Motherboard
              product: 1025C
              vendor: ASUSTeK COMPUTER INC.
              physical id: 0
              version: x.xx
              serial: EeePC-0123456789
              slot: To be filled by O.E.M.
            *-firmware
                description: BIOS
                vendor: American Megatrends Inc.
                physical id: 0
                version: 1025C.0701
                date: 01/06/2012
                size: 64KiB
                capacity: 1984KiB
                capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd...
            *-cpu:0
                description: CPU
                product: Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
                vendor: Intel Corp.
                physical id: 4
                bus info: cpu@0
                version: 6.6.1
                serial: 0003-0661-0000-0000-0000-0000
                slot: CPU 1
                size: 798MHz
                capacity: 1865MHz
                width: 64 bits
                clock: 533MHz
                capabilities: x86-64 boot fpu fpu_exception wp vme de pse tsc ...
                configuration: cores=2 enabledcores=1 id=2 threads=2
              *-cache:0
                  description: L1 cache
                  physical id: 5
                  slot: L1-Cache
                  size: 24KiB
                  capacity: 24KiB
                  capabilities: internal write-back unified
              *-cache:1
                  description: L2 cache
                  physical id: 6
                  slot: L2-Cache
                  size: 512KiB
                  capacity: 512KiB
                  capabilities: internal varies unified
              *-logicalcpu:0
                  description: Logical CPU
                  physical id: 2.1
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:1
                  description: Logical CPU
                  physical id: 2.2
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:2
                  description: Logical CPU
                  physical id: 2.3
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:3
                  description: Logical CPU
                  physical id: 2.4
                  width: 64 bits
                  capabilities: logical
            *-memory
                description: System Memory
                physical id: 13
                slot: System board or motherboard
                size: 2GiB
              *-bank:0
                  description: SODIMM [empty]
                  product: [Empty]
                  vendor: [Empty]
                  physical id: 0
                  serial: [Empty]
                  slot: DIMM0
              *-bank:1
                  description: SODIMM DDR3 Synchronous 1066 MHz (0.9 ns)
                  product: SSZ3128M8-EAEEF
                  vendor: Xicor
                  physical id: 1
                  serial: 00000004
                  slot: DIMM1
                  size: 2GiB
                  width: 64 bits
                  clock: 1066MHz (0.9ns)
            *-cpu:1
                physical id: 1
                bus info: cpu@1
                version: 6.6.1
                serial: 0003-0661-0000-0000-0000-0000
                size: 798MHz
                capacity: 798MHz
                capabilities: ht cpufreq
                configuration: id=2
              *-logicalcpu:0
                  description: Logical CPU
                  physical id: 2.1
                  capabilities: logical
              *-logicalcpu:1
                  description: Logical CPU
                  physical id: 2.2
                  capabilities: logical
              *-logicalcpu:2
                  description: Logical CPU
                  physical id: 2.3
                  capabilities: logical
              *-logicalcpu:3
                  description: Logical CPU
                  physical id: 2.4
                  capabilities: logical

    So here I see that my processor is effectively a 64-bit one. However, I'm wondering how my motherboard can have a "32 bits" width. I've browsed the web to find an answer, without success. I imagine it's just a technical fact that I don't know about. Thanks.

    Read the article

  • Calculate Quantity Available for POS - Inventory [closed]

    - by tunmise fasipe
    From what I have read, Quantity on Hand is the physical number of items in stock: http://www.businessdictionary.com/definition/quantity-on-hand.html Quantity Available is Quantity on Hand minus outbound items (e.g. ordered quantity): http://community.intuit.com/posts/quantity-on-hand-vs-quantity-available-2 Does this still hold for POS? Can there be outbound items in a POS system, since items are picked up immediately? If not, does that mean QtyOnHand = QtyAvailable for POS?
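
    The relationship as a tiny sketch, with hypothetical field names:

        class StockItem {
            int quantityOnHand;      // physical count on the shelf
            int quantityAllocated;   // reserved on open orders, layaway, transfers, ...

            int quantityAvailable() {
                // In a pure cash-and-carry POS nothing is ever allocated,
                // so this collapses to quantityOnHand.
                return quantityOnHand - quantityAllocated;
            }
        }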

    Read the article

  • User Control as container

    - by Luca
    I'm designing a simple expander control. I've derived from UserControl, drawn inner controls, built, run; all ok. Since an inner Control is a Panel, I'd like to use it as container at design time. Indeed I've used the attributes: [Designer(typeof(ExpanderControlDesigner))] [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))] Great I say. But it isn't... The result is that I can use it as container at design time but: The added controls go back the inner controls already embedded in the user control Even if I push to top a control added at design time, at runtime it is back again on controls embedded to the user control I cannot restrict the container area at design time into a Panel area What am I missing? Here is the code for completeness... why this snippet of code is not working? [Designer(typeof(ExpanderControlDesigner))] [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))] public partial class ExpanderControl : UserControl { public ExpanderControl() { InitializeComponent(); .... [System.Security.Permissions.PermissionSet(System.Security.Permissions.SecurityAction.Demand, Name = "FullTrust")] internal class ExpanderControlDesigner : ControlDesigner { private ExpanderControl MyControl; public override void Initialize(IComponent component) { base.Initialize(component); MyControl = (ExpanderControl)component; // Hook up events ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService)); IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService)); s.SelectionChanged += new EventHandler(OnSelectionChanged); c.ComponentRemoving += new ComponentEventHandler(OnComponentRemoving); } private void OnSelectionChanged(object sender, System.EventArgs e) { } private void OnComponentRemoving(object sender, ComponentEventArgs e) { } protected override void Dispose(bool disposing) { ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService)); IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService)); // Unhook events s.SelectionChanged -= new EventHandler(OnSelectionChanged); c.ComponentRemoving -= new ComponentEventHandler(OnComponentRemoving); base.Dispose(disposing); } public override System.ComponentModel.Design.DesignerVerbCollection Verbs { get { DesignerVerbCollection v = new DesignerVerbCollection(); v.Add(new DesignerVerb("&asd", new EventHandler(null))); return v; } } } I've found many resources (Interaction, designed, limited area), but nothing was usefull for being operative...

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long running taks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) includes something like the following: Common mistake #1 - Fixing concurrency issue by just put a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpNetworkRequestComplete is occuring on. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - fixing deadlock issues by blindly exiting the lock, wait for the other thread to finish, then re-enter the lock - but without handling the case that the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" it's pointer. But forgets to wait for the thread that is still running to release it's instance. As such, components are shutdown cleanly, then spurious or late callbacks are invoked on an object in an state not expecting any more calls. There are other edge cases, but the bottom line is this: Multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer on developing a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled, similar to COM apartment threading in Windows with the use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following: Literature or papers on design patterns around threads - something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions (that is, a UML equivalent for discussing threads across objects and classes). Educating your development team on the issues with multithreaded code. What would you do?
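
    A minimal sketch of that "one thread per component" idea using a single-threaded executor as the message loop; it is Java rather than the question's C++, and the names are made up:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Component state is only ever touched from its own executor thread, so the
        // component's internals can be written as if they were single-threaded.
        class HttpComponent implements AutoCloseable {
            private final ExecutorService apartment = Executors.newSingleThreadExecutor();
            private Object bar = new Object();   // stands in for m_pBar
            private boolean shutDown = false;

            // Called from any network thread: just marshal the work onto our thread.
            void onHttpRequestComplete(int status) {
                apartment.execute(() -> {
                    if (shutDown) return;        // a late callback is simply dropped
                    // ... use bar freely here; no locks needed on this thread
                });
            }

            @Override
            public void close() {
                apartment.execute(() -> { shutDown = true; bar = null; });
                apartment.shutdown();            // already-queued work still drains
            }
        }

    The Shutdown vs. OnHttpRequestComplete race from Mistake #1 disappears because both now run on the same thread, in the order they were queued.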

    Read the article

  • How to Correct & Improve the Design of this Code?

    - by DaveDev
    HI Guys, I've been working on a little experiement to see if I could create a helper method to serialize any of my types to any type of HTML tag I specify. I'm getting a NullReferenceException when _writer = _viewContext.Writer; is called in protected virtual void Dispose(bool disposing) {/*...*/} I think I'm at a point where it almost works (I've gotten other implementations to work) and I was wondering if somebody could point out what I'm doing wrong? Also, I'd be interested in hearing suggestions on how I could improve the design? So basically, I have this code that will generate a Select box with a number of options: // the idea is I can use one method to create any complete tag of any type // and put whatever I want in the content area <% using (Html.GenerateTag<SelectTag>(Model, new { href = Url.Action("ActionName") })) { %> <%foreach (var fund in Model.Funds) {%> <% using (Html.GenerateTag<OptionTag>(fund)) { %> <%= fund.Name %> <% } %> <% } %> <% } %> This Html.GenerateTag helper is defined as: public static MMTag GenerateTag<T>(this HtmlHelper htmlHelper, object elementData, object attributes) where T : MMTag { return (T)Activator.CreateInstance(typeof(T), htmlHelper.ViewContext, elementData, attributes); } Depending on the type of T it'll create one of the types defined below, public class HtmlTypeBase : MMTag { public HtmlTypeBase() { } public HtmlTypeBase(ViewContext viewContext, params object[] elementData) { base._viewContext = viewContext; base.MergeDataToTag(viewContext, elementData); } } public class SelectTag : HtmlTypeBase { public SelectTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("select"); //base.MergeDataToTag(viewContext, elementData); } } public class OptionTag : HtmlTypeBase { public OptionTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("option"); //base.MergeDataToTag(viewContext, _elementData); } } public class AnchorTag : HtmlTypeBase { public AnchorTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("a"); //base.MergeDataToTag(viewContext, elementData); } } all of these types (anchor, select, option) inherit from HtmlTypeBase, which is intended to perform base.MergeDataToTag(viewContext, elementData);. This doesn't happen though. It works if I uncomment the MergeDataToTag methods in the derived classes, but I don't want to repeat that same code for every derived class I create. This is the definition for MMTag: public class MMTag : IDisposable { internal bool _disposed; internal ViewContext _viewContext; internal TextWriter _writer; internal TagBuilder _tag; internal object[] _elementData; public MMTag() {} public MMTag(ViewContext viewContext, params object[] elementData) { } public void Dispose() { Dispose(true /* disposing */); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (!_disposed) { _disposed = true; _writer = _viewContext.Writer; _writer.Write(_tag.ToString(TagRenderMode.EndTag)); } } protected void MergeDataToTag(ViewContext viewContext, object[] elementData) { Type elementDataType = elementData[0].GetType(); foreach (PropertyInfo prop in elementDataType.GetProperties()) { if (prop.PropertyType.IsPrimitive || prop.PropertyType == typeof(Decimal) || prop.PropertyType == typeof(String)) { object propValue = prop.GetValue(elementData[0], null); string stringValue = propValue != null ? 
propValue.ToString() : String.Empty; _tag.Attributes.Add(prop.Name, stringValue); } } var dic = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase); var attributes = elementData[1]; if (attributes != null) { foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(attributes)) { object value = descriptor.GetValue(attributes); dic.Add(descriptor.Name, value); } } _tag.MergeAttributes<string, object>(dic); _viewContext = viewContext; _viewContext.Writer.Write(_tag.ToString(TagRenderMode.StartTag)); } } Thanks Dave

    Read the article

  • Blu-ray BD-R: Would you physically store it in a CaseLogic Wallet pocket?

    - by Rob
    I keep several backup copies of my material and files. For my DVDs, one set of copies is kept in a CaseLogic wallet folder pack, so that I can easily move it around when visiting friends or family, or for business. This is highly convenient. The other sets are kept in their jewel cases in hard plastic see-through storage boxes. Although CaseLogic wallet material is designed to be abrasion-free, their caveat is that external dust will be the cause of any blemishes. If hard dust gets into these pockets, which is inevitable, it will occasionally cause light hair-like scratches on the disc surface as the discs are removed and returned. For DVDs this is of no consequence, as the laser and error correction can more than cope with it. I'm aware that the Blu-ray spec requires anti-scratch disc surfaces, but I was wondering whether, given the smaller pits, dust and light scratches from wallet storage would cause more problems with Blu-rays than they do with DVDs. I'm using Blu-ray BD-R and BD-R DL write-once media.

    Read the article

  • Steps to take when technical staff leave

    - by Tom O'Connor
    How do you handle the departure process when privileged or technical staff resign / get fired? Do you have a checklist of things to do to ensure the continuing operation / security of the company's infrastructure? I'm trying to come up with a nice canonical list of things that my colleagues should do when I leave (I resigned a week ago, so I've got a month to tidy up and GTFO). So far I've got: Escort them off the premises Delete their email Inbox (set all mail to forward to a catch-all) Delete their SSH keys on server(s) Delete their mysql user account(s) ... So, what's next. What have I forgotten to mention, or might be similarly useful? (endnote: Why is this off-topic? I'm a systems administrator, and this concerns continuing business security, this is definitely on-topic.)

    Read the article

  • Open source system for swipe card access?

    - by Moduspwnens
    We're looking at replacing our campus-wide magnetic swipe card system with something more robust. The "programmer" side of me says there's got to be an open-source, scalable solution that already does this, but all I've been able to find are proprietary vendor-specific solutions. Ideally, it'd have the following: Based on some open standard that allows us to select from a wide selection of card readers (like IMAP or HTTP) Support different kinds of card access (magnetic strip, RFIDs, etc.) Future-proof (to the extent possible) The lack of information I'm finding leads me to believe I'm not searching for the right things... or such a solution doesn't exist. Is there not some basic, open-source solution to this (like MySQL for databases, or Moodle for an LMS, or Apache for a web server)?

    Read the article

  • Does more heat generation mean more wear and tear?

    - by Suhail Gupta
    I read that the hardware generally used in PCs is not optimized for running Linux. That is the reason the machine emits a large amount of heat and doesn't give the battery backup that we would get while working on Windows. ~REF Does it also mean more wear and tear on the hardware (when using Linux as compared to when using Windows)? Note: I have personally experienced large heat emission while working with Fedora 16 and Ubuntu 11.xx on my laptop.

    Read the article

  • How much does it wear an SD card to be frequently removed/reinserted?

    - by jtbandes
    My digital camera (a Sony a55) stores photos on an SD card. When I want to transfer these to my computer (a mid-2010 MacBook Pro), I have two options: use the USB cable to connect the camera to the computer, or use the computer's built-in SD card reader. The camera's SD card slot is the standard click-in, click-out (spring-loaded) mechanism. My laptop has a simple slot into which the card slides with a little more resistance than the former (the card slides only about halfway in so it can be easily removed). I notice that the card's contacts now have some shiny marks from one or both of these card slots: Does this type of wear threaten to significantly damage the card? Should I avoid switching the card between slots frequently, to extend its lifetime?

    Read the article

  • Microsoft Ergonomic Keyboards With Card Readers?

    - by Steve
    When I started working at my current job I developed tendinitis in my wrists. Luckily that cleared up when I started using a Microsoft ergonomic keyboard. The problem is that where I work is moving to more security. We will need to stick a card into a slot to log into our PCs. They bought a bunch of new keyboards with these slots built in. All regular keyboards. Is there something like the Microsoft Ergonomic keyboard that comes with such a card slot? Thanks.

    Read the article

  • Do you work in your server room?

    - by Gary Richardson
    I once had a job offer from a company that wanted my workstation to be in the AC controlled, noisy server room with no natural light. I'm not sure what their motivation was. Possibly it made sense to them for me to be close to the servers, or possibly they wanted to save the desk space for other employees. I turned down the job (for many reasons, including the working environment). Is this a common practice? Do you work in your LAN room? How do you cope?

    Read the article

  • How to encrypt dual boot windows 7 and xp (bitlocker, truecrypt combo?) on sdd (recommended?)

    - by therobyouknow
    I would like to set up a dual-boot Windows 7 and Windows XP laptop/notebook computer where each operating system's partition is fully encrypted. I would like to do this on an SSD - a 128GB Crucial M4. My research: Dual boot of TrueCrypt-encrypted OSs on one drive (not possible in TrueCrypt 7.x at the time of writing). This cannot be done with a standard TrueCrypt setup - it will only support encrypting one of the operating systems. I have tried this and also read about it here on superuser.com. However, I did see a solution that uses grub4dos as the initial bootloader to chain to separate TrueCrypt-encrypted OSs, in my case Windows 7 and Windows XP: http://yyzyyz.blogspot.co.uk/2010/06/truecrypt-how-to-encrypt-multiple.html I am not going to consider this solution, as it relies upon some custom code for use in the bootloader that is provided by the author. I would prefer a solution that can be fully understood, so that I can be sure there is nothing undesirable occurring (i.e. malware or simply bugs in the code). I would like to believe such a solution doesn't have those risks, but I can't be sure. BitLocker and TrueCrypt combination - a possible solution? So I am now considering a combination of encryption programs: I now aim to encrypt Windows XP with TrueCrypt and Windows 7 with BitLocker. Assuming the TrueCrypt bootloader can boot into non-TrueCrypt OSs (e.g. via hitting Escape to go to another menu), this solution may be viable. SSDs and encryption (use the fastest possible spinning hard disk instead?): I read in various superuser.com posts and elsewhere that current SSDs are not suited to whole-drive encryption for various reasons: the impact on the performance algorithms that give SSDs their advantage over spinning hard disks (algorithms used in compression of data, for example); wear on the SSD, shortening its life; security issues whereby data is repeated, as indicated in some TrueCrypt documentation. So I am now considering not using an SSD. But with the aim of having the fastest drive possible, I am considering using the Western Digital Scorpio Black 2.5" 7200rpm hard disk, as this appears to be top rated among spinning platter-based hard drives (I don't work for Western Digital). Summary: To achieve whole-drive-encrypted dual boot of Windows 7 and Windows XP with minimal performance impact, I intend to use a combination of TrueCrypt and BitLocker on a top-rated conventional spinning platter-based hard disk. Questions: Will my plan achieve whole-disk encryption of the dual-boot Windows XP and Windows 7? Or can you suggest a simpler solution, including one that requires only TrueCrypt (BitLocker is not available on XP), or another encryption tool, including paid-for ones? Will it provide the highest performance? Am I correct to avoid using an SSD with encryption for the reasons I discovered? Are the concerns about SSDs and encryption still very real (some articles I read go back to 2010)? Thanks for your input!

    Read the article

  • How to minimize the risk of employees spreading critical information?

    - by Industrial
    Hi everyone, What's common sense when it comes to minimising the risk of employees spreading critical information to rival companies? As of today, it's clear that not even the US government and military can be sure that their data stays safely within their doors. So I understand that my question should probably instead be written as "What is common sense to make it harder for employees to spread business-critical information?" If anyone really wants to spread information, they will find a way; that's the way life works and always has. If we make the scenario a bit more realistic by narrowing our workforce, assuming we only have regular John Does on board and not Linux-loving sysadmins, what would be good precautions to at least make it harder for employees to send business-critical information to the competition? As far as I can tell, there are a few obvious solutions that clearly have both pros and cons: block services such as Dropbox and similar, preventing anyone from sending gigabytes of data over the wire; ensure that only files below a set size can be sent as email (?); set up VLANs between departments to make it harder for kleptomaniacs and curious people to snoop around; plug all removable media units - CD/DVD, floppy drives and USB; make sure that no configuration changes to hardware can be made (?); monitor network traffic for non-linear events (how?). What is realistic to do in the real world? How do big companies handle this? Sure, we can take the former employee to court and sue, but by then the damage has already been caused... Thanks a lot

    Read the article
