Search Results

Search found 8266 results on 331 pages for 'distributed systems'.

Page 43 of 331

  • What's hot in Oracle Premier Support News for Solaris, Storage and Systems - How to Patch!

    - by user12244613
    Struggling to locate patches for Sun products? Can't find your Oracle system drivers? This question has been raised many times by customers and was the subject of a short video in the Oracle System Support Newsletter in February 2012. The transition from SunSolve to My Oracle Support changes how you think about the type of patch you're looking for. For example, in SunSolve you might have typed e1000g when looking for an Ethernet driver, but entering e1000g will not find anything in the My Oracle Support Patches and Updates menu. You need to use the Product or Family (Advanced) search, which is driven off the product name, so you type "Ethernet" and select the Ethernet product you are looking for to locate the patches for that product. Just to recap the video: if you are looking for the e1000g Ethernet driver, you need to use the Advanced search and search for Ethernet. 1. Log into My Oracle Support, select Patches and Updates, then select Product or Family (Advanced Search). 2. In the product line enter "Ethernet" and select the product name from the menu. 3. Check "remove superseded patches" – that ensures you only get relevant current patches in the results. 4. Select Search and the results are displayed. You now have more options, such as platform (Solaris, Linux, etc.), if you want to narrow the search further. Need more information? Log into My Oracle Support and watch the short 90-second video I put together, or view the 7-minute video using Firefox or Chrome – it shows searching for individual patches, Solaris, firmware, etc. If you are not receiving the Oracle System Support Newsletter: Option (a) Within My Oracle Support, make document ID 1363390.1 a favourite and revisit it on the 2nd of each month for the latest content. Option (b) By default the newsletter is sent to all customers who have logged a Service Request on an Oracle Systems Hardware product during the last 12 months, unless you have opted out of receiving Oracle communications in your profile on http://oracle.com

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation John Courtright, Structural Integrity Engineering There is heightened FAA attention to technical issues related to IFE and Wi-Fi systems installations The Aging Aircraft Safety Rule – EWIS & Damage Tolerance Analysis The Challenge: Maximize Flight Safety While Minimizing Costs Issue Papers & Testing, Testing, Testing The role of Airworthiness Directives (ADs) on the design of many IFE systems and all antenna systems. Goal is safety AND cost-effective maintenance intervals and inspection techniques The STC Process Briefly Stated Type Certifications (TC) Supplemental Type Certifications (STC) The STC Process Project Specific Certification Plan (PSCP) Managed by FAA Aircraft Certification Office (ACO) Type of Project (Electrical/Mechanical Systems or Structural) Specific Type of Aircraft Being Modified Schedule Design & Installation Location What does the STC Plan (PSCP) Cover? System Description – What does the system do? System qualification – Are the components qualified? Certification requirements – What FARs are applicable? Installation detail – What is being modified? Prototype installation – What is new? Functional Hazard Assessment (FHA) – Is it safe? EZAP-EWIS Requirements – Any aging aircraft issues? Certification Data – How is compliance achieved? Delegation and FAA involvement – Who is doing the work? Proposed certification schedule – When is the installation? Certification documentation – What the FAA expects to see Cabin Systems Certification Concerns In addition to meeting the requirements of DO-160, cabin system certification needs to address issues related to: Power management: Generally, IFE and Wi-Fi systems are classified as "Non-Essential Equipment" from a certification viewpoint. Connected to "non-essential" power buses Must be able to shed IFE & Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160) The FAA is more relaxed about Wi-Fi testing. It used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50. Aging aircraft concerns – electrical and structural Issue papers addressing technical concerns involving: "Structural Certification Criteria for Large Antenna Installations" Antenna "Vibration/Buffeting Compliance Criteria" DO-160: Environmental Test Procedures DO-160 – "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA Provides guidance to equipment manufacturers as to testing requirements Temperature: –40C to +55C Vibration and Shock Contaminant susceptibility – fluids and dust Electromagnetic Interference Cabin systems are generally classified as "non-essential" Swissair 111 crashed (in part) due to non-standard wiring practices. EWIS Design Implications Installation design must take EWIS requirements into account. This generally means: Aircraft surveys are needed to identify proper wire routing Ensure existing wiring diagrams are correct Identify primary/secondary/tertiary bus locations Verify proper separation of wire bundles exists Required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition Enhanced Zonal Analysis Procedure (EZAP) performed EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) EZAP is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.
Certification Considerations for Wi-Fi Systems Electrical – All existing DO-160 testing required Issue papers required Onboard EMI testing – any interference with aircraft systems when multiple Wi-Fi users are logged on? Vibration/Buffeting compliance criteria – what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria – what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required Goal – minimize maintenance inspection intervals

    Read the article

  • Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics

    - by fip
    WebLogic JMS Topics typically run in a WLS cluster, and so do the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Because the nature of a JMS Topic is to distribute the same copy of a message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites: How do we ensure that the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How do we make sure the messages are evenly distributed (load balanced) across the SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, and the composite so that JMS Topic messages are evenly distributed to the same composite running on different SOA cluster nodes without causing duplication. 2. The typical configuration In this typical configuration, we achieve load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDTs, and this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. Doing so involves the following steps: Step A. Configure the JMS Topic to be a UDD and PDT, at the WebLogic cluster that houses the JMS Topic. Step B. Configure the JCA connection factory with the proper FactoryProperties at the SOA cluster. Step C. Reference the JCA connection factory and define a durable subscriber name at the composite's JmsAdapter (or the *.jca file). Here are more details of each step: Step A. Configure the JMS Topic to be a UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions in the Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then on the same administration screen you can set the "Distribution Type" to "Uniform" and the Forwarding Policy to "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B. Configure the FactoryProperties of the JCA connection factory. You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA connection factory a certain type of "client". When you configure the JCA connection factory for the JmsAdapter, you define the list of properties in the FactoryProperties field as a semicolon-separated list: ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: except for ClientID, the other properties (ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false) are all default settings for the JmsAdapter's connection factory, so you do not have to specify them explicitly. All you need to do is specify the ClientID.
The ClientID is different from the subscriber ID that we will discuss in later steps. To keep it simple, just remember that you need to specify the ClientID and make it unique per connection factory. Step C. Reference the JCA connection factory and define a durable subscriber name at the composite's JmsAdapter (or the *.jca file). In the following example, the value 'MySubscriberID-1' is given as the value of the property 'DurableSubscriber': <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata"> <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/> <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message"> <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"> <property name="DurableSubscriber" value="MySubscriberID-1"/> <property name="PayloadType" value="TextMessage"/> <property name="UseMessageListener" value="false"/> <property name="DestinationName" value="jms/MyTestUDDTopic"/> </activation-spec> </endpoint-activation> </adapter-config> You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project. 3. The "atypical" configurations For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We discuss those scenarios here: Configuration A: The JMS Topic is NOT a PDT. In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out the "replicated" messages. Configuration B: The ClientIDPolicy=RESTRICTED. In this case, you need separate connection factories for different composites. More accurately, you need a separate connection factory for each JmsAdapter *.jca file. References: Managing Durable Subscription WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions JMS Troubleshooting: Configuring JMS Message Logging: Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API

    Read the article

  • Do I need to contact a lawyer to report a GPL violation in software distributed on Apple's App Store?

    - by Rinzwind
    Some company is selling software through Apple's App Store which uses portions of code that I released publicly under the GPL. The company is violating the licensing terms in two ways, by (1) not preserving my copyright statement, and not releasing their code under the GPL license and (2) by distributing my GPL-licensed code through Apple's App Store. (The Free Software Foundation has made clear that the terms of the GPL and those of the App Store are incompatible.) I want to report this to Apple, and ask that they take appropriate action. I have tried mailing them to ask for more information about the reporting process, and have received the automated reply quoted below. The last point in the list of things one needs to provide, the “a statement by you, made under penalty of perjury,” sounds as if they mean some kind of specific legal document. I'm not sure. Does this mean I need to contact a lawyer just to file the report? I'd like to avoid going through that hassle if at all possible. (Besides an answer to this specific question, I'd welcome comments and experience reports from anyone who has already had to deal with a GPL violation on Apple's App Store.) Thank you for contacting Apple's Copyright Agent. If you believe that your work has been copied in a way that constitutes infringement on Apple’s Web site, please provide the following information: an electronic or physical signature of the person authorized to act on behalf of the owner of the copyright interest; a description of the copyrighted work that you claim has been infringed; a description of where the material that you claim is infringing is located on the site; your address, telephone number, and email address; a statement by you that you have a good faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law; a statement by you, made under penalty of perjury, that the above information in your Notice is accurate and that you are the copyright owner or authorized to act on the copyright owner’s behalf. For further information, please review Apple's Legal Information & Notices/Claims of Copyright Infringement at: http://www.apple.com/legal/trademark/claimsofcopyright.html To expedite the processing of your claim regarding any alleged intellectual property issues related to iTunes (music/music videos, podcasts, TV, Movies), please send a copy of your notice to [email protected] For claims concerning a software application, please send a copy of your notice to [email protected]. Due to the high volume of e-mails we receive, this may be the only reply you receive from [email protected]. Please be assured, however, that Apple's Copyright Agent and/or the iTunes Legal Team will promptly investigate and take appropriate action concerning your report.

    Read the article

  • What tools are available for remote communication when working from home or with a distributed team?

    - by Ryan Hayes
    My supervisor is allowing my team to dip our toes in the water of working from home. Considering that a recent acquisition of another company is requiring some employees to love this new idea, which will hack up to an hour off their commute into work every morning, I really want this to succeed. In order to make it a success, we need good tools to make our lives a lot easier. We are currently set up with OpenVPN, and Team Foundation Server 2010 with SharePoint 2010, and use Live Messenger (for SharePoint integration and easier remote desktop) for IM. These are just what we use (and they are currently working well), but you can suggest other products. So, what are some great tools that will help us collaborate, communicate, and generally work together when we're hours apart?

    Read the article

  • How is your working time distributed between coding and thinking?

    - by mojuba
    ...as a percentage. For example 60/40 or 90/10 or 100/0. My hypothesis is that the bigger the proportion of time you spend thinking, the smaller your code can be as a result (and the less time will be needed to write it down). Think more, write less, in other words. Do you think it is true? As a side note, I think in typical software companies thinking is not part of the culture anyway: you are usually supposed to be sitting there at your computer typing something. You will almost definitely be noticed by your managers if you wander about with a blank look, thinking over your next steps with your code. Too bad.

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that will heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., if visitor 1 spends 2 minutes on site and visitor 2 spends 1 minute, the mean is 1.5 +/- 0.71): time spent on site, and page views. If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that even though the distributions of these quantities are probably going to be heavily skewed towards zero, we'll need some reasonable figures, or estimates of these figures, in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I certainly have had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!) - that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info will be applied towards a good cause :)
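    A minimal sketch of how such figures could be used once found: assuming, purely for illustration, that time-on-site follows a log-normal distribution (a common model for metrics that are non-negative and heavily skewed towards zero, though real analytics data may differ), the following C++ program converts a target mean and standard deviation into log-normal parameters and simulates visitors, which is one way to sanity-check the numbers before plugging them into sample size calculations. The figures 1.5 and 0.71 are just the toy values from the question.

        // Sketch only: simulate per-visitor time on site under an assumed log-normal model.
        #include <cmath>
        #include <iostream>
        #include <random>

        int main() {
            const double mean = 1.5, sd = 0.71;   // hypothetical target figures (minutes)

            // Convert the desired arithmetic mean/SD into the log-normal mu/sigma parameters.
            const double sigma2 = std::log(1.0 + (sd * sd) / (mean * mean));
            const double mu = std::log(mean) - 0.5 * sigma2;

            std::mt19937 gen(42);
            std::lognormal_distribution<double> visitTime(mu, std::sqrt(sigma2));

            const int n = 100000;                 // simulated visitors
            double sum = 0.0, sumSq = 0.0;
            for (int i = 0; i < n; ++i) {
                const double t = visitTime(gen);  // minutes on site for one visitor
                sum += t;
                sumSq += t * t;
            }
            const double m = sum / n;
            const double s = std::sqrt(sumSq / n - m * m);
            std::cout << "simulated mean = " << m << ", sd = " << s << "\n";
        }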

    Read the article

  • New forum on decentralized source control systems (Distributed Version Control Systems). Ask your questions

    Hello, DVCSs have been booming over the last few years, and a forum is now dedicated to them. Before posting your question, don't forget to consult the documentation resources: the Git documentation, the Mercurial documentation, and the SCM FAQ. If you would like to contribute to the French-language documentation base on this topic, feel free to contact the volunteer maintainers by email at conception [AT] redaction [DASH] developpez [DOT] com...

    Read the article

  • Best setup/workflow for a distributed team to integrate DVCS with a fragmented, huge .NET site?

    - by lazfish
    So we have a team with two developers and one manager. The dev server sits in a home office and the live server sits in a rack somewhere, handled by the larger part of my company. We have the freedom to do as we please, but I want to incorporate Kiln DVCS and FogBugz for us, with some standard procedures to make sense of our decisions/designs/goals. Our main product is web-based training through our .NET site with many videos etc., and we also do mobile apps for multiple platforms. Our code base is a 15-year-old fragmented mess. The approach has been rogue .asp/.aspx pages, with some class management implemented in the last 6 years. We still mix our HTML/VB/JS all in the same file when we add a feature/page to our site. We do not separate the business logic from the rest of the code. Wiring anything up in VS for IntelliSense or testing or any other benefit is more frustrating than it is worth, because of having to manually rejigger everything back into one file. How do other teams approach this? I noticed that when I did wire everything up for VS, it wants to make a class for all functions. Do people normally compile DLLs for page-specific functions that won't be reusable? What approaches make sense for getting our practices under control, while still being able to fix old anti-patterns and outdated code and still moving towards a logical structure for future devs to build on?

    Read the article

  • What videoconferencing platforms work best for distributed software development teams?

    - by user11347
    Today I had a religious experience: I participated in a videoconference using a high quality Polycom system. This made a huge difference in communication quality -- people that I had a terrible time understanding previously now sounded like Shakespeare. Seeing a high quality video image was enormously helpful. I asked operations how much the Polycom cost and they said that it cost $20K new and $4K off eBay. So this solution doesn't work for people who work from home or who work in offices but are in groups of 3 or fewer people. My budget for a videoconferencing system is a few hundred dollars per person. Skype is not nearly good enough. And I haven't seen a consumer webcam that is good enough either. Does such a solution exist? I'm looking to collaborate both with people who are close by (in the same city but not in the same room) and far away (on different continents).

    Read the article

  • How exactly are Distributed File Systems used in a cloud environment?

    - by vaab
    How exactly are Distributed File Systems used in a cloud environment? More precisely: Are live VM images (or their filesystems) usually located in the DFS? Are VMs usually used to run the backbone (the actual code) of the DFS structure? Precise examples citing a DFS (Ceph, Gluster, GFS, GPFS, Lustre) or a cloud environment (OpenStack, CloudStack, ...) would be appreciated, even if I'm more interested in Ceph on OpenStack for now.

    Read the article

  • Single database, multiple system dependency

    - by davenewza
    Consider an environment where we have a single, core database, with many separate systems using this one database. This leads to all of these systems having a common dependency, which ultimately introduces coupling between them. This means that we cannot always evolve systems independently of each other. Structural changes to the database (even if only intended for one particular system) require a full sweep test of ALL systems, and may require that other systems be 'patched' and subsequently released. This is especially tricky when you want to have separate teams working on different projects. What is a good 'pattern' to help in avoiding such coupling? I would imagine that a database should be exclusively depended on by one system. If other systems require data for whatever reason, they should request it from an API service of some kind, as sketched below. A drawback of this approach which comes to mind is performance: routing data between high-throughput systems through service calls is much slower than through a database connection.
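    To make the suggested pattern concrete, here is a minimal C++ sketch (all names are hypothetical, and the database/HTTP calls are stubbed out): the consuming system depends only on a small data-access interface it owns, so whether that interface is backed by a direct connection to the shared database or by a call to the owning system's API becomes an implementation detail that can change without rippling through the rest of the code.

        #include <iostream>
        #include <memory>
        #include <string>

        // A record the consuming system cares about.
        struct CustomerRecord {
            std::string id;
            std::string name;
        };

        // The abstraction the consuming system codes against; it neither knows nor
        // cares where the data physically lives.
        class CustomerSource {
        public:
            virtual ~CustomerSource() = default;
            virtual CustomerRecord fetch(const std::string& id) = 0;
        };

        // Implementation 1: reads the shared database directly (driver call omitted).
        class DirectDbCustomerSource : public CustomerSource {
        public:
            CustomerRecord fetch(const std::string& id) override {
                // e.g. SELECT name FROM customers WHERE id = ?
                return {id, "name-from-shared-db"};
            }
        };

        // Implementation 2: asks the system that owns the data via its API (HTTP call omitted).
        class ApiCustomerSource : public CustomerSource {
        public:
            CustomerRecord fetch(const std::string& id) override {
                // e.g. GET /customers/{id} against the owning system's service
                return {id, "name-from-owning-service"};
            }
        };

        int main() {
            // Swapping the backing store is a one-line change here, not a full sweep test.
            std::unique_ptr<CustomerSource> source = std::make_unique<ApiCustomerSource>();
            std::cout << source->fetch("42").name << "\n";
        }

    The performance drawback mentioned above does not disappear, but it becomes a property of one implementation rather than of every consumer.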

    Read the article

  • What are the advantages of a distributed version control for a team that is effectively never distributed?

    - by Luke CK
    When working remotely, our team only has access to our source code by remote desktop into our office PCs, so we never really work in offline mode. Does a distributed version control system like Mercurial or Git still give us advantages over our current centralized Subversion setup? If so, what are they? Are there any drawbacks or pitfalls? I've read in numerous places that shifting to distributed version control requires a change in thinking. Can someone explain what needs to change in this regard?

    Read the article

  • Distributed version control for HUGE projects - is it feasible?

    - by Vilx-
    We're pretty happy with SVN right now, but Joel's tutorial intrigued me. So I was wondering - would it be feasible in our situation too? The thing is - our SVN repository is HUGE. The software itself has a 15-year-old legacy and has survived several different source control systems already. There are over 68,000 revisions (changesets), the source itself takes up over 100 MB, and I can't even begin to guess how many GB the whole repository consumes. The problem then is simple - a clone of the whole repository would probably take ages to make, and would consume far more space on the drive than is remotely sane. And since the very point of distributed version control is to have as many repositories as needed, I'm starting to get doubts. How does Mercurial (or any other distributed version control system) deal with this? Or are they unusable for such huge projects?

    Read the article

  • Best Solution for Load Balancing geographically distributed NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America which has 30–50 GB of updated data every day, and it's very slow for our Europe and Asia branches to access directly. Therefore, I'm trying to set up two replica servers on those continents. I'm currently using rsync, but wonder if there exists a better solution that acts more like a distributed RAID, which allows the user to transparently access a file whether it is synced or not, with the user's request dispatched to the remote server if the file is not yet synced. I'm now looking into DRBD, but it seems not to have this auto-dispatching functionality. Does anyone know of a better solution?

    Read the article

  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems, because there are programming interfaces for getting the drive letter of the system drive. In practice, though, there are quite a few programs that assume that C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a re-install, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?
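    For reference, the programming interfaces mentioned above do exist; a minimal Win32 sketch (C++, compiled against windows.h) that asks the OS where the system directories actually live, instead of hard-coding C:\, might look like this:

        #include <windows.h>
        #include <iostream>

        int main() {
            char windir[MAX_PATH] = {0};
            char sysdir[MAX_PATH] = {0};

            // Both calls return 0 on failure and the string length on success.
            if (GetWindowsDirectoryA(windir, MAX_PATH) == 0 ||
                GetSystemDirectoryA(sysdir, MAX_PATH) == 0) {
                std::cerr << "lookup failed, error " << GetLastError() << "\n";
                return 1;
            }

            // On the machine described in the question this would report paths on H:,
            // which is exactly what the badly behaved programs fail to do.
            std::cout << "Windows directory: " << windir << "\n";
            std::cout << "System directory:  " << sysdir << "\n";
            return 0;
        }

    Programs that go through these calls (or equivalent mechanisms such as the %SystemRoot% environment variable) keep working regardless of which letter the boot drive ends up with.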

    Read the article

  • Best cloud based IT Systems management services out there?

    - by Ryk
    Our startup organisation is growing fast in two different office locations. That brings new challenges and headaches. Our entire company is cloud based, and I am looking for a good product to manage our remote systems. Currently we do not have on-site AD servers; we are using the Windows Azure AD services, so we cannot rely on group policies at this stage. I would like to be able to achieve the following (they are all laptops): remote desktop support, patch management, locking down software on machines (restricting them), and monitoring and managing systems. Other benefits would be good, but if I can achieve the ones listed above, it will go a long way. We have a combination of Windows 7 Pro, Windows 8, and Windows 8.1 machines. I am currently using Windows Intune, but it is really limited - really just a glorified patch enforcer. Thank you in advance for your help.

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day.  Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance".  I realize that oftentimes speakers will use a headline-grabbing title to create interest in their talk, and this one certainly got my attention.  I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me.  Clearly patching is important to all systems management and should be a part of any organization's security hygiene.  Further, PCI does require the patching of systems to maintain compliance.  So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent. So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software." Notice the word "appropriate" in the requirement.  This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question.  Haven't we all seen a vulnerability scanner throw a false positive and flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system?  Applying such a patch would obviously not be appropriate.  This does not mean an organization can ignore the fact they need to apply security patches.  It's pretty clear they must.  Of course, organizations have other options in terms of compliance when it comes to patching.  For example, they could remove a system from scope and make sure that system does not process or contain cardholder data.  [This may or may not be a significant undertaking.  I just wanted to point out that there are always options available.] PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months." Notice there is no mention of stopping the patching of one's systems.  And the note also states an organization may apply a risk-based approach. [A smart approach, but also not mandated.]  Such a risk-based approach is not intended to remove the requirement to patch one's systems.  It is meant, as stated, to allow one to prioritize their patch installations.   So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness?  I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture.
If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems.  Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS, SVN, etc. We are planning to migrate our projects to Visual Studio 2010, so the first idea that comes to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or another system)? Thank you.

    Read the article

  • How do I fix a permissions problem with MS Distributed File System?

    - by charlesrandall
    I have a new Windows 7 computer that is supposed to have access to particular network resources on a Distributed File System. However, despite all permissions being set correctly, I have consistent trouble accessing them. For instance, I'm supposed to be able to reach \\company.org\main\subdir. All the permissions have been granted, only when I try to access it by name, it tells me I don't have permission to access \\company.org\main. This is where the fun starts. If I ping company.org, get the IP, and replace company.org with the IP, I can then access \\IP\main\subdir without any problems at all. However, we have a ton of scripts and build tools that access the network resource by name. My sysadmin has found that using MS's dfsutil.exe, we can fix it temporarily using this sequence of commands: dfsutil.exe /pktinfo, dfsutil.exe /PktFlush, dfsutil.exe /SpcFlush, dfsutil.exe /PurgeMupCache, then dfsutil.exe /pktinfo again. After that, everything is great... until I reboot, or until some unspecified time later when suddenly I don't have access to \\company.org\main anymore. I'm hoping to find a more permanent solution than waiting for it to break and running a batch file.

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment: Private data (do not want to send or store sensitive data off-site) High dollar investment in current infrastructure Applications currently running well, but may need additional periodic capacity Current applications not designed in a stateless fashion In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization. Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed. 1. Client-Centric Hybrid Patterns In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario. 2. System-Centric Hybrid Patterns Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities. Code calls come in from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service-Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture.
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application. Connection resiliency - Applications on-premises normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic. Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric's ACS feature to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.  Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle. Resources: How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx  General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
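    On the connection resiliency point, "extended retry logic" usually boils down to retrying transient failures with a growing delay before giving up. A minimal, library-agnostic C++ sketch of that idea (the flaky operation and its failure signal are hypothetical placeholders; a real hybrid application would wrap its SQL Azure, Service Bus, or HTTP calls this way):

        #include <chrono>
        #include <functional>
        #include <iostream>
        #include <stdexcept>
        #include <thread>

        // Retry a callable with exponential backoff; rethrow once the attempts run out.
        template <typename T>
        T retryWithBackoff(const std::function<T()>& op,
                           int maxAttempts = 5,
                           std::chrono::milliseconds delay = std::chrono::milliseconds(200)) {
            for (int attempt = 1; ; ++attempt) {
                try {
                    return op();                        // success: hand back the result
                } catch (const std::exception& e) {
                    if (attempt >= maxAttempts) throw;  // give up after the last attempt
                    std::cerr << "attempt " << attempt << " failed (" << e.what()
                              << "), retrying in " << delay.count() << " ms\n";
                    std::this_thread::sleep_for(delay);
                    delay *= 2;                         // back off: 200 ms, 400 ms, 800 ms...
                }
            }
        }

        int main() {
            int calls = 0;
            // Hypothetical flaky remote call: fails twice, then succeeds.
            const int result = retryWithBackoff<int>([&calls]() {
                if (++calls < 3) throw std::runtime_error("transient connection error");
                return 42;
            });
            std::cout << "result = " << result << " after " << calls << " calls\n";
        }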

    Read the article

  • What do the 4 keyboard input method systems in 10.04 mean?

    - by Android Eve
    I am trying to install support for another language (in addition to the default US English). Checking that language's checkbox in "Install / Remove Languages..." wasn't too difficult. :) But now I want to add keyboard support for that language, too. Again, I am prompted with a nice listbox with the following 4 options: none, ibus, lo-gtk, and th-gtk. But I have no idea what these mean. I googled "ubuntu 10.04 keyboard input method system none ibus lo-gtk th-gtk" but all I could find were descriptions of problems, not an actual definition. Could you please point me to a webpage where I can learn what these 4 different methods mean and the pros and cons of each?

    Read the article

  • How and why are operating systems bootable from a USB?

    - by user114638
    I've been told to install Ubuntu on my laptop for work in order to learn shell scripting. I've read the best way is to install Ubuntu on a USB stick and partition my HDD. I'm curious how an OS is bootable from a USB stick. Is it literally just a small interface that can be put anywhere? This reminds me of a time I downloaded a game onto my USB stick; when I brought it to my friend's house he told me it would run slowly if I didn't install it and only ran it from the USB. Is this different from running Ubuntu from a USB? Will Ubuntu be slow?

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time. What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?

    Read the article

  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    On the Wikipedia page for Windows, it states that Windows is written in assembly for the bootloader and task switcher, and in C and C++ for kernel routines. This confuses me, because AFAIK you can call C++ functions from an extern "C" block, as C++ is just C with extra features (all of which can be rewritten in C if you wanted to, AFAIK). I can understand using C for the kernel functions so pure C apps can use them (like printf and such), but if they can just be wrapped in an extern "C" block, then why code in C? So my question is: why would a kernel be written in both C and C++ instead of just C++?
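    For reference, the wrapping the question refers to looks roughly like the following minimal sketch (names made up for illustration): a function implemented in C++ but exported with C linkage, so that a plain C translation unit can declare and call it.

        // exported.cpp - implemented in C++, callable from C thanks to the C linkage.
        #include <numeric>
        #include <vector>

        extern "C" int sum_values(const int* values, int count) {
            // Free to use C++ internally (containers, templates, RAII, ...).
            std::vector<int> v(values, values + count);
            return std::accumulate(v.begin(), v.end(), 0);
        }

        /* caller.c - a plain C file; it only ever sees a C-style declaration.
        #include <stdio.h>
        int sum_values(const int* values, int count);
        int main(void) {
            int data[] = {1, 2, 3};
            printf("%d\n", sum_values(data, 3));   // prints 6
            return 0;
        }
        */

    The catch, and part of the usual answer to the question, is that only a C-compatible surface can cross that boundary (no classes, templates, or exceptions in the signature), and the C++ features used behind it may need runtime support (exception tables, RTTI, constructors for globals) that a freestanding kernel environment does not readily provide.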

    Read the article
