Search Results

Search found 6992 results on 280 pages for 'engineered systems'.

Page 7 of 280

  • Interacting with clients using project management systems

    - by Keyo
    I work in web development, which involves a lot of smaller custom projects rather than one large product. Requirements and specifications are always coming from outside the company. We've set up a ticket tracking system (Active Collab, which is rubbish compared to redmine btw) and given access to clients so they can submit issues. The idea is that less time gets taken up with long phone conversations and emails. I think it can work really well if done right. However, I'm not so sure it's always a good thing. Feature requests have gone up a lot on some projects. The system also needs to be friendly to non-developers while having the many features that developers use. Developers' tickets do not always map 1-to-1 with the tickets clients will create. So the requirements and broader tickets need to be separated from the more specific developer (specification) related tickets. Perhaps we could use two systems: one for clients to submit their requirements or describe a bug, and one for developers to create tickets like "implement method x in class y". Maybe this can be achieved by structuring tickets into more appropriate categories or creating sub-tickets under a feature request ticket. I've briefly looked into Pivotal Tracker and it has a fundamentally different workflow. I would like to know how others are communicating with clients and keeping the technical workflow separate from the non-technical workflow. What tools do you use and how do you use them?

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    You're working in IT and not using any kind of version control system? Sorry, then you're doing something wrong!
    RSVP for the MSCC meetup of June: This month's meetup will be an introduction to the mechanics of version control systems (VCS) like git, Mercurial, TFS, and others in general. VCS are not optional but compulsory in any area of IT. Whether you're developing source code for the next buzz app, writing SQL scripts for your database, or automating your administrative tasks with shell scripts, it's better to have a "time machine" in order to keep multiple versions, stay organised, and leverage the power of differences.
    git - a modern approach to VCS (Nayar): Nayar is going to give us a brief overview of the basic principles of working with git: which steps are necessary to get started and which commands are used most often in order to get the most out of git.
    Visual Studio Online (VSO) (Jochen): Are you mainly rooted on the Windows platform and looking for a good alternative to Team Foundation Server (TFS)? Then VSO might give you a hand at achieving this. Similar to git, VSO is an open infrastructure, but it plays very well together with the Microsoft Azure cloud infrastructure.
    Recent and upcoming events in Mauritius: Let's have a chat about recent events like WebCup 2014 or the Emtel Knowledge Series, and get a head start on upcoming events like Code Challenge and others to come.
    Networking and general discussions: Of course, there will be plenty of time to chat and exchange with other like-minded craftsmen. Bring your topics and discuss various issues with other professionals. Share your experience and use the opportunity to learn from others. Looking forward to meeting you soon.
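
    As a taste of what Nayar will cover, a minimal git session might look like the sketch below (assuming git is already installed; the file names are placeholders):
    git init                      # start tracking the current directory
    git add deploy.sh schema.sql  # stage the scripts you want versioned
    git commit -m "Initial commit"
    git status                    # what changed since the last snapshot?
    git diff                      # leverage the power of differences
    git log --oneline             # the "time machine": every version so far
    git checkout -b experiment    # branch off to try something new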

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are of using each method. (I'm sure there are other ways than the ones I've listed....) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic like boolean or numeric values, or they can be more complex such as a range of values or a list of database items to be checked/unchecked. Permissions can be set on the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.

    Read the article

  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the Cloud today! It is the broadest strategy to support your journey to the cloud, along any path you choose, at any pace your business requires. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think? We will be at the Cloud Expo conference taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East. Oracle is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters. We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you get started with Infrastructure-as-a-Service with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Bootcamp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be at booth #511 on the Expo floor throughout the Cloud Expo conference. Join us for the keynote, general session, and technical sessions with Oracle:
    Keynote Session: A Pragmatic Journey to the Cloud - Tuesday, June 12, 2012
    General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications - Monday, June 11, 2012
    Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control - Monday, June 11, 2012
    Conference Session: The Java EE 7 Platform: Developing for the Cloud - Monday, June 11, 2012
    Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture - Monday, June 11, 2012
    Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder - Tuesday, June 12, 2012
    Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution - Tuesday, June 12, 2012
    Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC - Wednesday, June 13, 2012

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you. If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning process (documentation, requirements, policies, meetings, committees) to the extent that it possibly retarded any progress. In my opinion, the typical company resembles the quote from Tom DeMarco: "It isn't enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that." In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that 'Just-in-Time' was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as 'lean production'. Lean Thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, LensCrafters, L.L.Bean, Southwest Airlines, Digital River, and eBay. Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. And who brought lean thinking to the software world? Tom and Mary Poppendieck, that is. Here's one of their books: Implementing Lean Software Development: From Concept to Cash. Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we'll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • How do I change my resolution to 1600*900 for a wide screen monitor?

    - by Madhu
    How do I change my resolution to 1600*900 for a wide screen monitor in Oneiric? My hardware configuration is as below:
    madhu@madhu-Home:~$ lspci
    00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX
    00:01.0 PCI bridge: Silicon Integrated Systems [SiS] AGP Port (virtual PCI-to-PCI bridge)
    00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01)
    00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01)
    00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
    00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
    00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller
    00:04.0 Ethernet controller: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter (rev 02)
    00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03)
    00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
    00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
    00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller
    00:1f.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
    01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 771/671 PCIE VGA Display Adapter (rev 10)
    madhu@madhu-Home:~$ cat /etc/X11/xorg.conf
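
    For reference, one approach I am experimenting with is adding the missing mode by hand with cvt and xrandr. This is only a sketch: the output name (VGA1 here) is a placeholder, and the modeline numbers should be whatever cvt prints on the actual machine:
    cvt 1600 900                    # prints a CVT modeline for 1600x900 at 60 Hz
    # Modeline "1600x900_60.00"  118.25  1600 1696 1856 2112  900 903 908 934 -hsync +vsync
    xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
    xrandr --addmode VGA1 "1600x900_60.00"      # check `xrandr -q` for the real output name
    xrandr --output VGA1 --mode "1600x900_60.00"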

    Read the article

  • Error in MySQL Workbench Forward Engineered Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:
    DELIMITER //
    USE DB_Name//
    DB_Name//
    DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
    USE DB_Name//
    DB_Name//
    CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
    BEGIN
    SELECT * FROM Table_Name WHERE Id = id;
    END//
    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess. You can only enter values in the US system, but you can choose the output to be US or metric, and the code to do the conversions is everywhere. So I want to organize this and put together some simple rules. So I came up with this:
    Rules:
    The user can enter values in either US or metric, and the user interface will take care of marking this properly.
    All units will be stored internally as US, since the majority of the system already has most of the data stored like that and depends on this. It shouldn't matter, I suppose, as long as you don't mix units.
    All output will be in US or metric, depending on the user's selection/preference.
    In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already returns data like this: 4 x 13/16" screws, which means "four 13/16-inch screws". I need the 13/16" part to be in either US or metric. Where exactly do I put the conversion code for this unit? The above already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x ", and the " screws", but the question remains... where do I put the conversion code?
    Different locations for conversion routines:
    1) Right now the string is in a class where it's produced. I can put conversion code right into that class, and it may be a good solution. Except then, to be consistent, I will be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem, though, is that the rest of the codebase will then have to deal with two systems everywhere, should I do this.
    2) According to the rules, my idea was to put it in the view script, aka the last chance to modify it before it is shown to the user. That may be the right thing to do, but it strikes me it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up more, or do extra parsing, such as in my case above.)
    3) Another solution is to do this somewhere in the data prep step before the view, aka somewhere in the middle, before the view but after the data source. This strikes me as messy, and that could be the reason why my codebase is in such a mess right now.
    It seems that there is no best solution. What do I do?

    Read the article

  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we collected all best practices from the most advanced IT leaders, simply to prove that a Data Integration competency center should be the primary new IT team you establish. This is a niche market with unlimited potential for partners to become the much-needed data integration services provider trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for End Users, while justifying re-organizing your IT services teams. We are happy to share our research on:
    The economic impact of a data integration platform/competency center
    Justifying the strongest reasons and differentiators, using numeric analysis and best-practice customer case studies from specific industries
    Utilizing diagnostics and health-check analysis in building a business case for your customers
    What exactly is so special about the technology of Oracle Data Integration
    The impact of growing data volumes and the number of data sources
    Analysis of the usual solutions implemented so far, addressing key challenges and mistakes
    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving/sharing/integrating data across the enterprise and why this is an important new services opportunity for partners.
    Agenda:
    Data Integration Competency Center
    Oracle Data Integration Solution Overview
    Services Niche Market for OPN
    Summary
    Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.
    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC/11am EEST)
    Duration: 1 hour
    Register Today. For any questions please contact us at [email protected]

    Read the article

  • Exadata - Following up on customer deployments

    - by Carlos M. Orozco -Oracle
    Over the last year or so I've been visiting customers who have had Exadata deployed and have been enjoying the benefits the platform has been providing. Benefits include greater performance, consolidation of multiple databases, data compression, and time-to-value improvements. Most often I hear "my reports run faster." One hospitality company reported that reports that used to take 3 hours now run in 12 seconds. Another services company reported that all their batch reports, which used to take 11 hours, now run in 38 minutes. They also reported that their transactions post faster and batch updates run faster. So what does that mean? For most of them it means that they now have a platform that can handle growth. Most are growing 15% organically, but I've also seen 40% growth through acquisition. Exadata has been keeping up with the additional data demand as customers leverage compression and the smart storage features.

    Read the article

  • What are the functionalities of Distributed File systems and Distributed Storage Systems?

    - by Berkay
    I'm reading the cloud vendors' solutions for distributed storage systems, such as Amazon Dynamo and Google Big Table, and I'm really confused by two terms: what are distributed file systems for in the cloud, and what are distributed storage systems for? What are the differences between these terms and their functionalities? If I understand these terms I can create the general architecture of the cloud vendors; any good tutorial or web page would be appreciated. Thanks

    Read the article

  • Security for university research lab systems

    - by ank
    Being responsible for security in a university computer science department is no fun at all. Let me explain: it is often the case that I get a request for installation of new hardware or software systems that are really so experimental that I would not dare put them even in the DMZ. If I can avoid it and force an installation in a restricted inside VLAN, that is fine, but occasionally I get requests that need access to the outside world. And actually it makes sense to have such systems access the world for testing purposes. Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose and the research proposal), actually hospital patients not so well aware of technology. So it makes sense to open it to the rest of the world. What I am looking for is anyone who has experience with dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options or methodologies?
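
    For what it's worth, the direction I'm leaning toward is DMZ placement plus tight filtering on the gateway. A rough sketch with iptables, where 192.0.2.10 is a hypothetical stand-in for the experimental SIP host and the RTP port range would have to match the system's actual configuration:
    # allow SIP signalling and a constrained media range to the experimental host only
    iptables -A FORWARD -d 192.0.2.10 -p udp --dport 5060 -j ACCEPT
    iptables -A FORWARD -d 192.0.2.10 -p udp --dport 10000:20000 -j ACCEPT   # RTP range, adjust as needed
    # never let the experimental host initiate traffic toward the internal VLANs
    iptables -A FORWARD -s 192.0.2.10 -d 10.0.0.0/8 -j DROP
    # drop anything else headed to the host
    iptables -A FORWARD -d 192.0.2.10 -j DROP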

    Read the article

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on, and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .NET console applications, .NET Windows applications, shell extensions, and possibly exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development presently targets the .NET framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects. We're thinking of your standard stack: database, data integration layer, business objects layer, web services (or REST) layer, and client application, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems, and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.NET MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

    Read the article

  • Interested in embedded systems. Where to begin?

    - by Ala ABUDEEB
    Hello, I'm a computer systems engineering student. I'm interested in designing embedded systems, but I don't know where to begin learning this and what topics are essential to proceed in this domain. So can you please tell me what topics I have to study, and what books are available in the market or online that can help me? Please help me. P.S. As an engineering student I have basic knowledge of circuit theory and the microcontroller realm. Thank you in advance.

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.
So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows. Bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however.
That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)

    Read the article

  • What are the advantages of version control systems that version each file separately?

    - by Mike Daniels
    Over the past few years I have worked with several different version control systems. For me, one of the fundamental differences between them has been whether they version files individually (each file has its own separate version numbering and history) or the repository as a whole (a "commit" or version represents a snapshot of the whole repository). Some "per-file" version control systems: CVS ClearCase Visual SourceSafe Some "whole-repository" version control systems: SVN Git Mercurial In my experience, the per-file version control systems have only led to problems, and require much more configuration and maintenance to use correctly (for example, "config specs" in ClearCase). I've had many instances of a co-worker changing an unrelated file and breaking what would ideally be an isolated line of development. What are the advantages of these per-file version control systems? What problems do "whole-repository" version control systems have that per-file version control systems do not?

    Read the article

  • REGISTER NOW! ORACLE HARDWARE SALES TRAINING: HARDWARE AND SOFTWARE - ENGINEERED TO BE SOLD TOGETHER!

    - by mseika
    REGISTER NOW! ORACLE HARDWARE SALES TRAINING: HARDWARE AND SOFTWARE - ENGINEERED TO BE SOLD TOGETHER! Dear partner, You can now register for Oracle's EMEA Hardware Sales Training Roadshow: "Hardware and Software - Engineered to be sold together!" The objective of this one-day, face-to-face, free-of-charge training session is to share with you and your Oracle peers the latest information on Oracle's products and solutions and to ensure that you are fully equipped to position and sell Oracle's integrated stack. Please find the agenda, schedule details and registration information here. The seats are limited and available on a first-come, first-served basis. We recommend you register as early as possible to reserve your seat. Register Now. We hope you will take maximum advantage of these great learning and networking opportunities and look forward to welcoming you to your nearest event! Best regards, Giuseppe Facchetti, Partner Business Development Manager, Servers, Oracle EMEA; Sasan Moaveni, Storage Partner Sales Manager, Oracle EMEA

    Read the article

  • Comparison of Code Review Tools/Systems

    - by SytS
    There are a number of tools/systems available aimed at streamlining and enhancing the code review process, including:
    CodeStriker
    Review Board, the code review system in use at VMware
    Code Collaborator, a commercial product by SmartBear
    Rietveld, based on Mondrian, the code review system in use at Google
    Crucible, a commercial product by Atlassian
    These systems all have varying feature sets, and differ in degrees of maturity and polish; the selection is a little bewildering for someone who is evaluating code review systems for the first time. Some of these tools have already been mentioned in other questions/answers on Stack Overflow, but I would like to see a more comprehensive comparison of the more popular systems, especially with respect to:
    integration with source control systems
    integration with bug tracking systems
    supported workflow (reviews pre/post commit, review of contiguous/non-contiguous revision ranges, etc.)
    deployment/maintenance requirements

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.
    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL
    Challenges
    Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment
    Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages
    Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners
    Solutions
    Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go-live
    Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution
    Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms
    Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Smartsoft's OCEAN Payment Processing Solution achieves Oracle Exadata Optimized status. “Performance is the most important issue for our success in the market and running OCEAN on the Oracle Exadata Database Machine provides customers with extreme performance.” – Learn more
    Banking solution FORBIS Ltd’s FORPOST achieves Oracle Exadata, Exalogic and SuperCluster Ready status. “We are glad to offer our current and future customers the newest features provided by Oracle Engineered Systems to achieve maximum reliability and speed operation.” – Learn more

    Read the article

  • Integrating HP Systems Insight Manager into an existing environment

    - by ewwhite
    I'm working with an environment that spans multiple data centers/sites and consists primarily of HP ProLiant servers (G5-G7) running Linux. The mix is 30% RHEL/CentOS, the rest are Gentoo :(. I also have a few dozen virtual machines running back-office and Windows servers on VMware ESX hosts. I run OpenNMS to pull SNMP data from the various server nodes and networking devices. While OpenNMS works wonderfully for up/down, thresholds, and notifications, its native handling of traps is a little rough and the graphs are not particularly pretty. I use Orca/RRD graphs for performance trending and nice graphs. I'm tasked with inventorying the environment and wanted to come up with a clean way to organize server information. Since my environment is mostly HP, I've been playing with HP Systems Insight Manager as a way to extract server data and to deploy HP health/monitoring packages and firmware. The Gentoo systems eventually have to be converted to CentOS, so getting a quick assessment of what hardware is where would be great. Although I've read through a few hundred pages of HP manuals, I'm having a difficult time understanding how to get HP SIM to do what I want. My main problems are:
    I have about 40 subnets to deal with; 98% connected with private lines to facilities across the globe. I don't want to initiate an HP SIM discovery only to pull back every piece of intermediate networking hardware and equipment from all of the locations. I'd like this to focus on the servers. Is there a more efficient way to get HP SIM to do the discovery or break discovery into manageable chunks?
    I have OpenNMS configured to accept traps. I don't want HP SIM to duplicate that effort. It seems like the built-in software deployment tool wants to overwrite the trapsink parameters for the systems it encounters during discovery.
    I have about 10 administrative username/password combinations in use across this infrastructure.
    In terms of general workflow, do people typically install the HP Management Agents during the initial OS deployment (e.g. a kickstart post script) or afterwards from HP SIM?
    Is HP SIM too thick/fat to be an inventory tool? I can't tell if it's meant to be used standalone or alongside other monitoring products.
    Since the majority of the systems I'm trying to track are those running Gentoo (in order to plan the move to CentOS), is there any way for HP SIM to extract system model information from them (like dmidecode)?
    I have systems here where I may have an SSH key established, but not direct user or login access. Is there a way for me to import an SSH private/public key pair into HP SIM to reach out to the servers that can't accept standard credentials?
    There are a handful of sites where I have inconsistent access or have a double-NAT situation. I may be able to poke a server, but it may not be able to find its way back to the management system. Is there a workaround for this?
    The certificate configuration for HP SIM seems complicated. What is the preferred setup for trust between systems?
    I'd also appreciate any notes or recommendations on using this product. Or if there's a better way to do this, I'd like to know.
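
    For the hardware inventory piece specifically, the fallback I've been using on the Gentoo boxes is plain dmidecode over SSH from the management station; a minimal sketch, where gentoo-hosts.txt is a hypothetical list of hostnames and root SSH access is assumed:
    # pull the model/serial details HP SIM would normally discover
    sudo dmidecode -s system-manufacturer
    sudo dmidecode -s system-product-name
    sudo dmidecode -s system-serial-number
    # or loop over hosts from the management station
    for h in $(cat gentoo-hosts.txt); do
        echo "== $h =="
        ssh root@"$h" 'dmidecode -s system-product-name; dmidecode -s system-serial-number'
    done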

    Read the article
