Search Results

Search found 659 results on 27 pages for 'safety'.

Page 16/27 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • How is software used in critical life-or-death systems tested?

    - by waiwai933
    An airplane, unlike, say, a website, is a system in which any failure of certain subsystems is completely unacceptable, since an error in, for example, flight monitoring can cause the autopilot to malfunction and put the aircraft into a dive. Obviously, this doesn't happen, because the brilliant engineers at Boeing and Airbus build checks into the autopilot to make sure it doesn't suddenly decide a dive is a perfectly acceptable and safe maneuver. Or perhaps the computer crashes, and the pilots of newer fly-by-wire aircraft can no longer actually fly the plane. Of course, there are various safety procedures and redundancies built into these systems to prevent a crash (of both the software and the aircraft). On the other hand, it's quite obvious that software isn't perfect: both open source and closed source software crash regularly, and only the simplest "Hello World" program doesn't fail. How do the engineers who design the software systems in the aeronautical, medical, and other life-or-death industries manage to test their software so that it doesn't fail (and, if it does fail, at least fails gracefully)? I'm desperately hoping that you're not all going to say: "Oh, I work for Boeing/Airbus/(some other company) and it's not tested! Have fun on your next flight/hospital visit."

    Read the article

  • Integrated Reporting is Getting Closer

    - by jmorourke
    Oracle recently sponsored a webcast on CFO.com titled "The CFO Playbook on Integrated Reporting: Integrating Sustainability into Financial Disclosures." The speakers for this webcast were James Margolis, a partner with Environmental Resources Management (ERM), a global provider of environmental, health, safety, risk and sustainability consulting services (EHSS), and Mike Wallace, Director of the Global Reporting Initiative's Focal Point USA. The webcast focused on why top companies in the U.S. and overseas are incorporating sustainability content into their annual reports and other financial disclosures. The speakers discussed the benefits of integrating sustainability reporting with traditional financial reporting. They noted how investors, corporate directors, lenders and, most recently, the Securities and Exchange Commission use this information to better understand, benchmark and value companies. They also discussed the November 2012 release of an Integrated Reporting Framework by the International Integrated Reporting Council (IIRC). See the press release and link to the framework here. The shift towards integrated financial and sustainability reporting is gaining momentum, with a number of global stock exchanges endorsing this approach in 2012. See the links here if you want to listen to the webcast or download the slides. Also, here is a demonstration of Oracle's solution for integrated financial and sustainability reporting. If you're interested in learning more about this and Oracle's other Sustainability Reporting solutions, click here. If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • How do I back up my customers' data?

    - by marcamillion
    If you run a SaaS app, or work on one, I would love to hear from you. Where the safety and security of your customers' data is paramount, how do you secure it and back it up? I would love to know your main host (e.g. Heroku, Engine Yard, Rackspace, MediaTemple, etc.) and who you use for your backups. Be as detailed as possible - e.g. give a quick overview of your service and the data you store (images, for instance), and what happens with the images when a user uploads them (e.g. they go to your Linode VPS and are posted to the site for the user to see, then they are automatically sent to AWS or wherever, then once a week they are backed up to tape by the managed hosting provider, and you also back them up to your house/office). If you could also give some idea of the average unit cost of storage (per GB, per user, per month), I would really appreciate it. I'm getting ready to launch my app, and I would love to get some more perspective on the nitty-gritty details involved. Thanks!

    Read the article

  • Oracle Exalogic Customer Momentum @ OOW'12

    - by Sanjeev Sharma
    [Adapted from here] At Oracle OpenWorld 2012, I sat down with some of the Oracle Exalogic early adopters to discuss the business benefits they were realizing by embracing the engineered-systems approach to data-center modernization and application consolidation. Below is an overview of the four companies that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.

    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges: Rapid business growth resulted in frequent outages and poor response time of the online storefront. A conventional, ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX. Poor performance and unavailability of the online storefront resulted in revenue loss from purchase abandonment.
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates, resulting in a double-digit increase in online conversion rates that translated directly into revenue uplift.

    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk.
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.

    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges: Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years; undertake a massive business transformation by aligning change strategy with execution.
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities serving 87,000 manufacturers.

    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential and security solutions.
    Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced the ability to offer differentiation and compete.
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.

    Check out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • Java Champion Dick Wall Explores the Virtues of Scala (otn interview)

    - by Janice J. Heiss
    In a new interview up on otn/java, titled "Java Champion Dick Wall on the Virtues of Scala (Part 2)," Dick Wall explains why, after a long career in programming exploring Lisp, C, C++, Python, and Java, he has finally settled on Scala as his language of choice. From the interview: "I was always on the lookout for a language that would give me both Python-like productivity and simplicity for just writing something and quickly having it work and that also offers strong performance, toolability, and type safety (all of which I like in Java). Scala is simply the first language that offers all those features in a package that suits me. Programming in Scala feels like programming in Python (if you can think it, you can do it), but with the benefit of having a compiler looking over your shoulder and telling you that you have the wrong type here or the wrong method name there. The final 'aha!' moment came about a year and a half ago. I had a quick task to complete, and I started writing it in Python (as I have for many years) but then realized that I could probably write it just as fast in Scala. I tried, and indeed I managed to write it just about as fast." Wall makes the remarkable claim that once Java developers have learned to work in Scala, they typically find themselves more productive than they were in Java when they work on large projects. "Of course," he points out, "people are always going to argue about these claims, but I can put my hand over my heart and say that I am much more productive in Scala than I was in Java, and I see no reason why the many people I know using Scala wouldn't say the same." Read the interview here.

    Read the article

  • User accounts in FTP

    - by Brad
    I have an FTP server (proftpd on Debian) that I'm going to give a couple of friends access to, and I want some safety nets in place, just in case. These are some of the things I'd like to do:
    - Jail the accounts to their home directories and impose a cap on the amount of data they can upload
    - Allow them access to a shared folder (via a symlink or something) where they have full access (also with a storage cap, but a larger one)
    - Allow my own account full access to the system (using groups, I guess)
    - Not allow anonymous access, or allow it only with its own folder, separate from the shared user folder
    Currently, I've got the accounts set up and jailed, but it seems like the symlink that I put in is not allowing them to visit the shared folder. I suppose this has to do with them not having read permissions anywhere but their own home directories, or maybe it's something else; I'll continue to look into it and provide any information that is requested. Is what I'm trying to do possible? Any tips or resources that you can share are appreciated. Thanks.

    Read the article

  • Oracle Solutions supporting ICAM deployments

    - by user12604761
    The ICAM architecture has become the predominant security architecture for government organizations. A growing number of federal, state, and local organizations are in various stages of using Oracle ICAM solutions. The relevance of ICAM has clearly extended beyond the Federal ICAM mandates to any government program that must enable standards-based interoperability, such as health exchanges and public safety. The state-government-endorsed version of ICAM was just released with the NASCIO SICAM Roadmap. ICAM solutions require an integrated security architecture. The major new release of Oracle Identity Management 11gR2 in August focuses on a platform approach to identity management. This makes it easier for government organizations to acquire and implement a comprehensive ICAM solution rather than individual products. The following analyst reports describe the value of the Oracle solutions. According to the Aberdeen Group: "Organizations can save up to 48% deploying a platform of (identity management) solutions when compared to deploying point solutions." IDC Product Flash, July 2012: "Oracle may have hit the home run grand slam in identity management recently with the announcement of Oracle Identity Management 11g R2." For additional information on the Oracle ICAM solutions, attend the webcast on October 10, 2012: "ICAM Framework for Enabling Agile Service Delivery." Visit the Oracle Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources and networks.

    Read the article

  • DB auto failover in C# does not work when the principal server physically goes offline

    - by user62521
    I'm setting up DB auto failover in C# with SQL Server 2008, and I have a 'high safety with automatic failover' mirror using a witness. My connection string looks like "Server=tcp:DC01; Failover Partner=tcp:DC02; database=dbname; uid=sewebsite;pwd=somerndpwd;Connect Timeout=10;Pooling=True;" During testing, when I turn off the SQL Server service on the principal server, the auto failover works like a charm, but if I take the principal server offline (by shutting down the server or killing the network card), auto failover does not work and my website just times out. I found this article where the second-to-last post suggests that it's because we are using named pipes, which do not work when the principal goes offline, but we force TCP in our connection string. What am I missing to get this DB auto failover working?

    Read the article

  • Windows 7 unresponsive after hibernation

    - by Simon Verbeke
    For the last few days, when I wake Windows from hibernation, Windows itself becomes unresponsive. I can use any of the programs that were already open before hibernating, but the window functions (closing, resizing, etc.) don't respond. I can't open any new programs, except for the Task Manager. Also, I don't get an Internet connection. When I use Ctrl + Alt + Del, I get either a black screen or the regular background. Then it hangs, and after a while it shows me a message that Windows failed to start that "thing" (I don't know what it's called in English; something to do with safety). I have Windows 7 Ultimate installed.

    Read the article

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread "adds" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore still see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why, then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or in solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
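
    A minimal sketch, in C11 purely for concreteness, of the point being made above: the persistent structure itself is safe to read from any thread, because nothing already published is ever mutated, but making one thread's new version visible to others still needs a separate device, here a single atomic pointer playing the role of an "atom". All names (Node, cons, shared_head) are illustrative, not taken from any particular library; error handling and freeing are omitted for brevity.

        /* Build with: cc -std=c11 example.c */
        #include <stdatomic.h>
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct Node {
            int value;
            const struct Node *next;   /* structural sharing: the tail is reused, never mutated */
        } Node;

        /* "Adding" returns a new list head; the old list is left untouched. */
        static const Node *cons(int value, const Node *tail) {
            Node *n = malloc(sizeof *n);
            n->value = value;
            n->next = tail;
            return n;
        }

        /* The only mutable state: an atomic reference to the current version. */
        static _Atomic(const Node *) shared_head = NULL;

        int main(void) {
            const Node *v1 = cons(1, NULL);     /* version 1: [1] */
            atomic_store(&shared_head, v1);     /* publish version 1 */

            const Node *v2 = cons(2, v1);       /* version 2: [2 1], sharing v1's cell */
            /* A thread still holding v1 keeps seeing [1]; nothing it can read changes. */
            atomic_store(&shared_head, v2);     /* publish version 2 for subsequent readers */

            for (const Node *p = atomic_load(&shared_head); p; p = p->next)
                printf("%d ", p->value);        /* prints: 2 1 */
            printf("\n");
            return 0;
        }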

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.
    The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration. e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (e.g. no TS, no mail, no IIS, etc.).
    I have two major problems I'm hoping you can help me with:
    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not on laptops or anything, either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?
    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?
    Things I've thought about trying:
    For problem 1:
    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    For problem 2:
    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.
    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • How to refactor a myriad of similar classes

    - by TobiMcNamobi
    I'm faced with similar classes A1, A2, ..., A100. Believe it or not, but yeah, there are roughly a hundred classes that almost look the same. None of these classes are unit tested (of course ;-) ). Each of these classes is about 50 lines of code, which is not too much by itself. Still, this is way too much duplicated code. I am considering the following options:
    1. Write tests for A1, ..., A100, then refactor by creating an abstract base class AA. Pro: I'm (near to totally) safe; the tests ensure that nothing goes wrong. Con: Much effort, and duplication of test code.
    2. Write tests for A1 and A2, abstract the duplicated test code, and use that abstraction to create the rest of the tests. Then create AA as in 1. Pro: Less effort than in 1 while maintaining a similar degree of safety. Con: I find generalized test code weird; it often seems ... incoherent (is this the right word?). Normally I prefer specialized test code for specialized classes, but that requires a good design, which is my goal for this whole refactoring.
    3. Write AA first, testing it with mock classes, then inherit A1, ..., A100 from it successively. Pro: Fastest way to eliminate duplicates. Con: Most Ax classes look very much the same, but where they don't, there is the danger of changing the code's behaviour by inheriting from AA.
    4. Other options ...
    At first I went for option 3, because the Ax classes really are very similar to each other. But now I'm a bit unsure whether this is the right way (from a unit-testing enthusiast's perspective).

    Read the article

  • How to save a document and close it after the save

    - by Le Questione
    Currently I have a .bat file that opens a program, opens a document, and then reloads this document so it contains new data. After that I need a timeout so it can run some other things within the document, which takes about 2 minutes in total, but for safety I used 5 minutes. Then I want it to save the document and close the program. I've searched a lot for this, but I can't find anything that saves the document and then closes it; I can only find how to save a .bat file, and I already have that!
    CD /D "D:\01 - Config\02 - Developer"
    start qv.exe /l "D:\02 - Content\07 - Accesspoint\tracktrace.qvw"
    timeout /t 300
    ????
    exit 0
    Please advise, thanks

    Read the article

  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, a main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety issues. Suppose you have module A with a function A_function() that I want to test. This function calls a function B_function(), implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function(). However, the day comes when I have to implement module B with the real version of B_function(). Of course, the two B_functions cannot live in the same executable, so I don't know how to have a single "launcher" to test both modules. James Grenning's way out is to replace, in module A, the call to B_function() with a function pointer that can point to the mock or to the real function, as needed. However, I work in a team, and I cannot justify a decision that would make no sense if it were not for the tests, and no one explicitly asked me to use a test-driven approach. Maybe the only way out is to generate a different executable for each module. Any smarter solution? Thank you.
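
    For what it's worth, a minimal sketch of the function-pointer "seam" described above. Because the mock gets its own symbol name, the real module B and the mock can be linked into the same test executable, and one launcher can exercise both; the production build keeps calling the real B_function by default. A_function and B_function follow the question's naming; b_function_impl and the rest are illustrative.

        /* module_b.h -- the interface module A depends on */
        int B_function(int input);

        /* module_a.c -- calls B through a pointer that defaults to the real
         * implementation, so nothing changes for the production build. */
        #include "module_b.h"

        int (*b_function_impl)(int) = B_function;    /* the seam */

        int A_function(int input)
        {
            return b_function_impl(input) + 1;       /* stand-in for A's real work */
        }

        /* test_module_a.c -- the test build repoints the seam at a mock.
         * The mock has a different name, so it coexists with the real B_function. */
        extern int (*b_function_impl)(int);

        static int B_function_mock(int input) { (void)input; return 41; }

        void test_A_function_uses_B(void)
        {
            b_function_impl = B_function_mock;
            /* ASSERT(A_function(0) == 42);  -- with whatever test framework is in use */
        }

    The "different executable per module" idea mentioned in the question is the other common route (a link seam: the test binary links the mock object file, the production binary links the real one); the pointer simply avoids multiplying launchers.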

    Read the article

  • Integrated Reporting Is Getting Closer

    - by Evelyn Neumayr
    By John O'Rourke, Vice President, Product Marketing, Oracle
    Oracle recently sponsored a webcast on CFO.com titled "The CFO Playbook on Integrated Reporting: Integrating Sustainability into Financial Disclosures," which focused on why top companies in the U.S. and overseas are incorporating sustainability content into their annual reports and other financial disclosures. The webcast speakers, James Margolis, a partner with Environmental Resources Management (ERM), a global provider of environmental, health, safety, risk and sustainability consulting services (EHSS), and Mike Wallace, Director of the Global Reporting Initiative's Focal Point USA, discussed the benefits of integrating sustainability reporting with traditional financial reporting. They noted how investors, corporate directors, lenders and, most recently, the Securities and Exchange Commission use this information to better understand, benchmark and value companies. They also talked about the November 2012 release of an Integrated Reporting Framework by the International Integrated Reporting Council (IIRC). Read the press release and link to the framework here. The shift towards integrated financial and sustainability reporting is gaining momentum, with a number of global stock exchanges endorsing this approach in 2012. Visit these links to listen to the webcast and download the slides. You can also view a demonstration of Oracle's solution for integrated financial and sustainability reporting. If you're interested in learning more about this and Oracle's other sustainability reporting solutions, click here. If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • Backing up a Windows 2003 Server that has a Certificate Authority

    - by Dina
    I want to export and migrate a Certificate Authority (CA) role from a Windows 2003 machine to a new Windows 2008 R2 virtual machine. I was told that I cannot have two CA roles on the same network at the same time. Therefore, I must first export the certificates on the older machine, delete the CA role, then add the CA role on the new machine and import the certificates into it. As a safety precaution, I am tasked with finding a backup solution in case this does not work and I need to revert back to the old Windows 2003 CA. My question is: what is the best software for doing this type of backup? I am currently trying out Symantec Backup Exec 2012, which I hope will allow me to create a backup prior to removing the CA role on Windows 2003. If the CA migration fails, the backup will allow me to revert the old machine to a time before I removed its CA role.

    Read the article

  • How do I rescue files from the encrypted home folder via a live USB stick?

    - by Alexia
    I know, this has been asked and answered all over the internet already. However, I'm starting to feel stupid, since the information out there is not helping me. Just this morning, I wanted to install the newest update to 13.10. After the download, when it came to the actual installation, the installer froze and didn't do anything for hours. At that time, I was still logged in; the computer was working and everything was accessible to me. However, I made the mistake of not immediately making safety copies of everything. Instead, I just rebooted. Long story short: my computer even fails to reset to a previous version via GRUB. But I am able to boot from a USB stick and, after starting Nautilus, I see my home folder on the HD. I would now like to copy its contents onto an external hard disk. Problem 1: I have no rights to access the folder like that. Problem 2: It is encrypted. Problem 3: I don't know how to give myself the rights to access the folder, nor do I know how to decrypt it. I assume it might help that I still know these things: my old login name, my old login phrase, and a 32-character string of hexadecimal digits that I copied to my list of passwords as "Ubuntu Encryption Code". I copied it digitally right after installing Ubuntu the first time and encrypting the home folder, so there won't be any typos; I am sure of that. The solutions that I have seen so far tell me that I need the "encryption phrase", but when I follow the instructions and use the phrase from my list, I only get messages of denial. Can anyone help me through this particular problem, please?

    Read the article

  • Building a network at home, what cables to use (if any)?

    - by Faruz
    My house is currently under construction, and while building it I wanted to design a home network. My main objectives are surfing and HD streaming. The house is one level, 100 sq m (about 1,080 sq ft), and one of the rooms is a safety room with reinforced concrete walls. About a year ago, when I started planning, I thought about putting Cat 6 STP cables in the walls and creating network points in the rooms. Should I use STP or FTP? I heard that STP is a problem regarding connectors and such. Is it really beneficial? Will it work OK if I run the cable together with the telephone line? Or should I go with WLAN and count on 802.11n to enable me to stream HD across the house? Is 802.11n that good?

    Read the article

  • I just recursively chmod'd everything under / to 750. Any tips?

    - by Ouairz
    I won't be the first and I won't be the last, I suppose. While playing around with the find command, I made a whoops, and it would appear that instead of changing the permissions of the ~/web directory to 750, it changed the permissions of the entire filesystem (/) to 750. I'm not certain, though, because any attempt to investigate is thwarted by "Permission denied" messages. For everything. This was the offending command: sudo find ~/web . -type d -exec chmod 750 {} \; If I'm not mistaken, the Ubuntu team disabled root logins as a safety precaution, so I'm out of ideas. I'm (obviously) a total newbie when it comes to file permissions, so I was wondering if anyone had some good, or even some bad, advice to share. I've mentally prepped myself for losing everything on the computer, which is only of mild consequence since I have backups, but I did do a bit of work on this box over the week, and it would be a shame to lose it all due to a boneheaded mistake. If you are reading this message, ask yourself: have you backed up any of your work recently? Thanks in advance for any insights. Feel free to scold me for using sudo carelessly.

    Read the article

  • Should business services cross bounded contexts?

    - by Paul T Davies
    Firstly, I am following the convention that a bounded context is synonymous with a department, or possibly that one department has one to many bounded contexts. We have a client consultancy department that has a Documentation Service. Documents are stored in the Document Store Service (which is where all documents in the company are stored - it is a utility service), and the Documentation Service stores information about each document (a business service). As it was designed for the client consultancy, it is information relevant to them. Now the health and safety department needs somewhere to store information about a document. This is different information from the client consultancy's, but I have been instructed to extend the existing service to account for this extra information. I feel this service is now crossing a bounded context. My worry is that all departments will eventually store their information in here and the service will become bloated, trying to be all things to all departments. Each document record will only store a subset of the information, because it will only belong to one department. It will get worse when different departments want to store the same information but refer to it in different ways, or when two departments want to store different information that they refer to in the same way. In my understanding, this is exactly the reason for bounded contexts. I feel each department should have its own business service for information about a document, but use the same utility service to actually store the document. What would be the correct approach?

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in the two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:
    - To build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++).
    - C++ type safety (when using modern C++) makes everything easier to get "correct".
    - It's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code.
    - Potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact, as I haven't tried it yet:
    - It might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write); compilation time shouldn't be a problem in this case.
    - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure it can be easily checked at runtime in C++, and the language will not check it for you.
    - Libraries in C++ are harder to manage than in scripting languages.
    - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.
    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?
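
    On the UTF-8 doubt specifically, a structural check is straightforward to write in portable C/C++ with no extra dependencies. This is only a rough sketch (it validates the byte structure but deliberately ignores overlong forms and surrogate ranges), and the function name is made up for the example:

        #include <stddef.h>

        /* Returns 1 if buf[0..len) is structurally valid UTF-8, else 0. */
        int looks_like_utf8(const unsigned char *buf, size_t len)
        {
            size_t i = 0;
            while (i < len) {
                unsigned char b = buf[i];
                size_t extra;
                if      (b < 0x80)           extra = 0;   /* ASCII byte           */
                else if ((b & 0xE0) == 0xC0) extra = 1;   /* 2-byte sequence lead */
                else if ((b & 0xF0) == 0xE0) extra = 2;   /* 3-byte sequence lead */
                else if ((b & 0xF8) == 0xF0) extra = 3;   /* 4-byte sequence lead */
                else return 0;                            /* invalid lead byte    */
                if (i + extra >= len && extra > 0) return 0;   /* truncated sequence */
                for (size_t k = 1; k <= extra; k++)
                    if ((buf[i + k] & 0xC0) != 0x80) return 0; /* bad continuation  */
                i += extra + 1;
            }
            return 1;
        }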

    Read the article

  • How to track my kid's Multi-Browser website history

    - by Rachel
    My kid is 14, homeschooled, and a capable computer user. All of her school work is done online; the computer literacy course even uses three different browsers: Chrome, IE, and Firefox. I have two laptops running Windows 7 (one Pro, the other Home). Is there a way to get all of her browser history in one place? I have to account for 60 minutes of class time per subject per day, but trying to do this across all three browsers is getting too complicated. Thanks!
    PS. My kid and I talk about internet safety and usage regularly, and she knows that I monitor where she goes and how long she is there. Secrecy is not an issue.

    Read the article

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assume I have the standard components of rendering, physics, scripts and networking, and a quad-core CPU. I see two ways of multi-threading:
    Option A ('vertical'): dedicate one core to each component of the engine, e.g. one core for the rendering task, one for the physics, etc. Advantages: I do not need to worry about thread safety within each component, and I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading).
    Option B ('horizontal'): each task may be split up into 1 <= n <= numCores threads and executed in parallel, with the tasks themselves running one after the other. Advantages: it allows for work sharing, i.e. each thread can take over work still remaining while the others are still processing, and I can take advantage of libraries that are designed for multi-threading (e.g. DirectX).
    I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
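
    A minimal sketch of the 'horizontal' Option B, written with POSIX threads purely for illustration (a real engine would use its platform's job or task system): one shared counter hands out task indices, so idle workers keep pulling work until the current batch is drained. The task body and the counts are made up for the example.

        /* Build with: cc -std=c11 -pthread example.c */
        #include <pthread.h>
        #include <stdio.h>

        #define NUM_WORKERS 4
        #define NUM_TASKS   32

        static int next_task = 0;                         /* index of the next unclaimed task */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void run_task(int id) {                    /* stand-in for physics/render work */
            printf("task %d done\n", id);
        }

        static void *worker(void *arg) {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                int id = (next_task < NUM_TASKS) ? next_task++ : -1;   /* claim next task */
                pthread_mutex_unlock(&lock);
                if (id < 0) break;                        /* batch drained: worker is free */
                run_task(id);
            }
            return NULL;
        }

        int main(void) {
            pthread_t threads[NUM_WORKERS];
            for (int i = 0; i < NUM_WORKERS; i++)
                pthread_create(&threads[i], NULL, worker, NULL);
            for (int i = 0; i < NUM_WORKERS; i++)
                pthread_join(threads[i], NULL);           /* all tasks in the batch finished */
            return 0;
        }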

    Read the article

  • Remember me or not?

    - by taeja87
    I was told to post this on Webmasters instead of Stack Overflow. Is it safe to have a "remember me" feature? Would it be somewhat safe (knowing it won't be 100% safe) to allow users to close their browser and come back still logged in? I am not exactly sure which way I should go after reading different things about safety. I learned about session fixation and implemented protections against it. From experience, on some sites, if "remember me" is checked then only your username/email is remembered and you are still required to re-enter your password; other sites allow you to come and go as much as you want without being logged out after the browser has closed. If it is safe, what is the current best way of implementing "remember me"/"stay logged in"?
    http://stackoverflow.com/questions/3531377/best-practise-for-remember-me-feature
    http://stackoverflow.com/questions/5087969/what-is-the-code-for-stay-logged-in-or-remember-me-while-user-login-in-php
    http://bytes.com/topic/php/answers/881197-stay-logged-remember-me-php-sessions-cookies
    http://security.stackexchange.com/questions/41/good-session-practices
    Also: the site I am working on uses email and password login.
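
    The linked threads mostly converge on a persistent-cookie token scheme, so here is a rough sketch of just the token handling, written in C with OpenSSL only to make the idea concrete (the same split applies in PHP or whatever the site uses): the cookie carries a random selector plus a random validator, and the server stores only the selector and a hash of the validator, so a leaked token table cannot be replayed directly. All names and sizes are illustrative.

        /* Build with: cc example.c -lcrypto */
        #include <openssl/rand.h>
        #include <openssl/sha.h>
        #include <stdio.h>

        static void to_hex(const unsigned char *in, size_t len, char *out) {
            for (size_t i = 0; i < len; i++)
                sprintf(out + 2 * i, "%02x", in[i]);
            out[2 * len] = '\0';
        }

        int main(void) {
            unsigned char selector[9], validator[32], digest[SHA256_DIGEST_LENGTH];
            char sel_hex[19], val_hex[65], dig_hex[65];

            /* Random selector (used to look up the row) and validator (proves possession). */
            if (RAND_bytes(selector, sizeof selector) != 1 ||
                RAND_bytes(validator, sizeof validator) != 1)
                return 1;

            /* Only the hash of the validator is stored server-side. */
            SHA256(validator, sizeof validator, digest);

            to_hex(selector, sizeof selector, sel_hex);
            to_hex(validator, sizeof validator, val_hex);
            to_hex(digest, sizeof digest, dig_hex);

            printf("cookie: %s:%s\n", sel_hex, val_hex);              /* sent to the browser  */
            printf("db row: selector=%s hash=%s\n", sel_hex, dig_hex); /* stored with user id  */
            return 0;
        }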

    Read the article

  • IPTables: allow SSH access only, nothing else in or out

    - by Disco
    How do you configure IPTables so that it will only allow SSH in, and allow no other traffic in or out? Are there any safety precautions anyone can recommend? I have a server that I believe has been successfully migrated away from GoDaddy and is no longer in use, but I want to make sure just because ... you never know. :) Note that this is a virtual dedicated server from GoDaddy... That means no backup and virtually no support.
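
    As a rough sketch only (assuming SSH listens on the standard port 22; adjust before use, and apply it from a console/KVM session rather than over SSH so a typo can't lock you out):

        # default-deny everything, then allow loopback and SSH
        iptables -F
        iptables -P INPUT   DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT  DROP
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT  -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED     -j ACCEPT

    Note that rules added this way do not survive a reboot unless they are saved with your distribution's iptables-save mechanism.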

    Read the article
