Search Results

Search found 4363 results on 175 pages for 'requirements gathering'.

Page 96/175

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, though it annoys me), and I do not want to learn MS/T-SQL (it only allows one NULL row if the column is unique). I am thinking about learning a new database system. My requirements for it are:
    - Must allow multiple connections at once (read and write).
    - All data, or the data I choose, must be ACID compliant.
    - Performance should be good. I have a 17 GB table in one project; it should perform well on reads and transactional writes. With MySQL it took hours to restore, and there were no foreign keys on that specific table. It only finished within a workday because I found a suggestion to adjust a setting (I think it was key_buffer), and it still took hours.
    - Unique columns that allow more than one row to be NULL. I shouldn't have to say it, but dammit, MS.
    - Allows ongoing backups, something like 'binary logs': relatively small amounts of data I can grab and apply to my local database to keep it in sync with the one on the server.
    - Table joins. I would rather not write a bunch of queries to simulate a join.
    What I would like but do not require:
    - Foreign keys. This may become a requirement later.
    - Open source.
    - Fair tool support, so I can measure queries, easily back up/restore, etc.
    - A .NET and C (or C++) interface. (I have seen one that uses raw TCP with JSON, which was OK-ish.)
    - Good subquery support. Once I was working with an older version of MySQL (I believe < 5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries, or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset.
    What database system should I learn?

    Read the article

  • Architectural approaches to creating a game menu/shell overlay on PC/Linux?

    - by Ghopper21
    I am working on a collection of games for a custom digital tabletop installation (similar to Microsoft Surface tables). Each game will be an individual executable that runs full-screen. In addition, there needs to be a menu/shell overlay program running simultaneously. The menu/shell will allow users to pause games, switch to other games, check their game history, etc. Some key requirements of the shell:
    - It intercepts all user input (mainly multitouch) before passing it on to the currently running game (so that it can, for instance, know to pop up on a "pause" command).
    - It can appear over arbitrary portions of the screen, with the currently running (but presumably paused) game still showing underneath, ideally with a dynamic shape/size, to allow for an animated in/out drawer effect over the game.
    I'm currently looking into different architectural approaches to this problem, including Fraps and DirectX overlays, but I'm sure I'm missing some ways to think about this. What are the main approaches I should be considering? (Note that the table is currently run by a Windows PC, but it could potentially be a Linux box instead.)
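
    As an illustration of the first requirement above (the shell sees every input event before the active game does), here is a minimal, platform-agnostic Python sketch. The names ShellOverlay and InputRouter and the gesture strings are hypothetical; a real implementation would hook the platform's touch input and forward unconsumed events to the game process rather than to an in-process callback.

        # Hypothetical sketch: shell-first input routing for a full-screen game plus overlay.
        class ShellOverlay:
            def __init__(self):
                self.visible = False

            def handle_event(self, event):
                """Return True if the shell consumed the event."""
                if event.get("gesture") == "pause":
                    self.visible = True          # pop the drawer up and start eating input
                    return True
                if self.visible:
                    if event.get("gesture") == "resume":
                        self.visible = False     # close the drawer; the game regains input
                    return True                  # while the drawer is open, the game sees nothing
                return False

        class InputRouter:
            """Sits between the touch driver and the currently running game."""
            def __init__(self, shell, game_handler):
                self.shell = shell
                self.game_handler = game_handler

            def dispatch(self, event):
                if not self.shell.handle_event(event):
                    self.game_handler(event)     # e.g. inject into the game's window or socket

        def game_handler(event):
            print("game received:", event)

        router = InputRouter(ShellOverlay(), game_handler)
        for e in [{"gesture": "tap"}, {"gesture": "pause"}, {"gesture": "tap"}, {"gesture": "resume"}]:
            router.dispatch(e)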

    Read the article

  • Kickstarter and 2D smartphone games

    - by mm24
    I am about to launch a Kickstarter project as, after 14 months of full-time development on my first iOS game, I have run out of money. I developed an iOS game that needs a few more months to be ready (the game structure is there, but I haven't yet worked on balancing the difficulty of the various levels). I have a feeling that most of the computer games funded on Kickstarter are for console, PC or Mac, and not for smartphones. The category that many people seem to like is RPG-style games. I have done tons of work over a year and collaborated with musicians and illustrators to get top-quality graphics and music. The game looks cool for a 2D iOS game but, compared to what I've seen on Kickstarter, I feel rather small and humbled. I have searched for smartphone game projects on Kickstarter but haven't found many. I believe the reason is that people are not keen on backing an app that is normally sold for $0.99, as they perceive it is not something big. Am I the only one having this feeling? Could anyone please share a list of references to some successfully backed Kickstarter smartphone game projects? (In this way the question will not become a "chat" and will fulfill the requirements to be a gamedev question.) Any other article or authoritative answer will be welcome.

    Read the article

  • How to draw the line between quality and time?

    - by m3th0dman
    On the one hand, I have been taught by various software engineering books ([1] as an example) that my job as a programmer is to make the best possible software: great design, flexibility, easy maintenance, and so on. On the other hand, I realize that I actually write software for money and not for entertainment; although it is very nice to write good code, plan ahead, and refactor after writing, I wonder if that is always best for the business (after all, we should be responsible). Does the business always benefit from the best possible code? Maybe I'm over-engineering something, and it's not always useful. So how should I know when to stop in the process of achieving the best possible code? I am sure that experience makes a difference here, but I believe this cannot be the only answer. [1] Uncle Bob, in Clean Code, says on page 6: "They [managers] may defend the schedule and requirements with passion; but that’s their job. It’s your job to defend the code with equal passion."

    Read the article

  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication, but I don't know which publication it's in. We know the table name: 'MyTable'. We have replication running on our server and it's replicating our database, or part of it: 'MyDatabase'. We need to know if the table is replicated and, if so, which publication will need to be reviewed if we make changes to the table. How?
      USE MyDatabase
      GO
      /* Lots of info about our table but not much that's relevant to our current requirements */
      SELECT * FROM sysobjects WHERE NAME = 'MyTable'
      -- mmmm, getting there
      /* To quote BOL: "Contains one row for each merge article defined in the local database. This table is stored in the publication database." The interesting column is [pubid] */
      SELECT * FROM dbo.sysmergearticles AS s WHERE NAME = 'MyTable'
      -- really close now
      /* The sysmergepublications table "contains one row for each merge publication defined in the database. This table is stored in the publication and subscription databases." So this is where we get the publication details */
      SELECT * FROM dbo.sysmergepublications AS s WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'
      -- DONE IT
      /* Combine the two tables above and we get the information we need */
      SELECT s.[name] AS [Publication name]
      FROM dbo.sysmergepublications AS s
      INNER JOIN dbo.sysmergearticles AS s2 ON s.pubid = s2.pubid
      WHERE s2.NAME = 'MyTable'
    So I now know which publication I need to review before changing the table.

    Read the article

  • Tracking subdomains in the same profile as the main domain

    - by Osvaldo
    I have a site, let's call it http://www.example.com, with a non-universal Google Analytics account. Now we have to add new functionality in a subdomain like https://subdomain.example.com as a micro site. On that subdomain the URLs will be something like https://subdomain.example.com?param1=foo&param2=bar. We can't change the requirements, as the main site and the mini-site use different CMSes/applications. This is strictly a Google Analytics question. We need to count pageviews and events that happen on that subdomain (with URLs like https://subdomain.example.com?param1=foo&param2=bar) as belonging to the main domain. So pageviews and events on https://subdomain.example.com?param1=foo&param2=bar need to be recorded as if they happened on http://www.example.com/path/to/whatever/I/want. Fortunately we have full control over JavaScript on the main domain site and on the subdomain site too. How can we make this work? Do we need to change the tracking code on both the main domain and the subdomain? Do we need to reconfigure Google Analytics? Please note again that we do not want to create a new view for the subdomain. Both the mini-site and the main site should be in the same account, property and view.

    Read the article

  • Unity, libgdx, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses way too many roadblocks in terms of which devices it supports (and even if it does support them, it doesn't work well on all of them). I've been looking around for alternatives, and found something called libgdx. Well, it's nothing like Unity unfortunately, but at least it seems like I may be able to reach a larger audience in the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be alright with libgdx too. But I really want to know if ditching Unity is a smart idea, considering how comfortable I am with it already, and how much I like it. Finally, is libgdx something you would recommend considering my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

    Read the article

  • Consultations with ATG Development at OpenWorld 2014

    - by Steven Chan (Oracle Development)
    Our OpenWorld 2014 San Francisco conference is about six weeks away. We have a great lineup of sessions this year. Our EBS Applications Technology track sessions are listed here, and we'll have a more detailed article about those soon. One of the advantages of attending OpenWorld is that you can meet face-to-face with senior staff in ATG Development. You can use these meetings to discuss your questions, requirements, plans, and deployment architectures with us. There are several options for doing this:
    - At general sessions: collar the speaker of your choice after his or her presentation.
    - At the Meet The Experts sessions: these are first-come, first-served round-table discussions.
    - Setting up private meetings via your Oracle account manager.
    The last option is best if you have lots of in-depth questions or confidential details about your implementation that cannot be discussed in front of other customers. Many of this blog's experts, including me, will be attending OpenWorld this year. If you'd like to meet with us privately, please contact your Oracle account manager to arrange that as soon as possible. My calendar, in particular, is already starting to fill up. It is often completely full by the time OpenWorld starts. See you there!

    Read the article

  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years...) - finally, here it begins. First post: something really simple, but a wise man once told me the best way to improve SQL Server performance. Store less data. That's it... that's all there is to it. Over the years, I've seen the following:
    - A 200 GB database which held 3 days of data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - A table developed by DBAs to hold application table cardinality information. That information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database: 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million...
    - Lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.
    Imagine how much faster the backups, DBCC checks and reindexes ran once the above 3 changes were implemented. How often do you review your big databases / tables to see if you’re actually holding only data that is really required by the business?

    Read the article

  • In which fields does quality of the software product matter as much as the completion time?

    - by Nav
    Someone told me that if the software product meets the client's expectations, it is good quality. But I've worked with interaction designers (the same kind of people who made Gmail's interface and usability so cool!), and I've loved working with them, because even though they came up with hundreds of changes in requirements and emphasised many, many subtle details, when the software was complete I could look at the product and say WOW! At the place I currently work, the only thing that matters is completing the project on time. As long as it works and as long as the client says it's OK, nobody bothers to improve it. I'm not talking about gold-plating, but I believe that for a programmer to enjoy his (well, maybe her too ;) ) job, they should be able to proudly say "Hey, I made that software", and that comes only when the product is of good quality. Apart from your opinions on this, I'd also like to know in which fields (e.g. aerospace, finance, etc.) I could find companies (or you could mention the company name) where the quality of a product is as important as completing the project on time.

    Read the article

  • Strategies for avoiding SQL in your Controllers... or how many methods should I have in my Models?

    - by Keith Palmer
    So a situation I run into reasonably often is one where my models start to either:
    - grow into monsters with tons and tons of methods, OR
    - allow you to pass pieces of SQL to them, so that they are flexible enough to not require a million different methods.
    For example, say we have a "widget" model. We start with some basic methods:
      get($id), insert($record), update($id, $record), delete($id), getList() // get a list of Widgets
    That's all fine and dandy, but then we need some reporting:
      listCreatedBetween($start_date, $end_date), listPurchasedBetween($start_date, $end_date), listOfPending()
    And then the reporting starts to get complex:
      listPendingCreatedBetween($start_date, $end_date), listForCustomer($customer_id), listPendingCreatedBetweenForCustomer($customer_id, $start_date, $end_date)
    You can see where this is going... eventually we have so many specific query requirements that I either need to implement tons and tons of methods, or some sort of "query" object that I can pass to a single ->query(query $query) method... or just bite the bullet and start doing something like this:
      $list = MyModel->query("start_date > X AND end_date < Y AND pending = 1 AND customer_id = Z")
    There's a certain appeal to just having one method like that instead of 50 million other more specific methods... but it feels "wrong" sometimes to stuff a pile of what's basically SQL into the controller. Is there a "right" way to handle situations like this? Does it seem acceptable to be stuffing queries like that into a generic ->query() method? Are there better strategies?
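
    One common middle ground, not prescribed by the post, is a small criteria/query object: the controller states what it wants, the model owns the SQL, and values always travel as bind parameters. Here is a minimal Python sketch of the idea (the names Criteria and WidgetModel are illustrative; in real code the column names should also come from a whitelist):

        # Sketch of a criteria object that keeps raw SQL out of controllers.
        import sqlite3

        class Criteria:
            """Collects column/operator/value filters and renders a parameterized WHERE clause."""
            ALLOWED_OPS = {"=", "<", ">", "<=", ">=", "<>"}

            def __init__(self):
                self._clauses, self._params = [], []

            def where(self, column, op, value):
                if op not in self.ALLOWED_OPS:
                    raise ValueError("unsupported operator: %r" % op)
                self._clauses.append("%s %s ?" % (column, op))
                self._params.append(value)
                return self                      # allow chaining

            def render(self):
                if not self._clauses:
                    return "", []
                return " WHERE " + " AND ".join(self._clauses), list(self._params)

        class WidgetModel:
            def __init__(self, conn):
                self._conn = conn

            def find(self, criteria):
                where, params = criteria.render()
                return self._conn.execute("SELECT * FROM widgets" + where, params).fetchall()

        # The controller expresses intent; it never sees SQL text.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE widgets (id INTEGER, pending INTEGER, customer_id INTEGER, created TEXT)")
        model = WidgetModel(conn)
        rows = model.find(Criteria().where("pending", "=", 1).where("customer_id", "=", 42))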

    Read the article

  • Should I redo an abandoned project with Lightswitch?

    - by Elson
    I had a small project that I was doing on the side. It was basically a couple of forms linked to a DB. Access was out, because it was specifically meant to be a web application. Being a small project, I used ASP.NET Dynamic Data but, for various reasons, the project ended before deployment. I met the client recently, and he said there was still a need for it. I'm considering restarting the project with Dynamic Data, but I've seen some Lightswitch demos and was suitably impressed with the beta. I will wait for the RTM if I use it, but is it a good idea to use Lightswitch to replace the Dynamic Data work? The amount of work I put into the Dynamic Data site isn't really an issue. Additional information: it's a system that tracks production in a small factory, broken down by line, machine and section, and will generate reports. I would guess that the data structure will remain fairly constant over time, but that the reporting requirements will grow. The other thing is that the factory is part of a larger group, and I'm hopeful that, if this system succeeds, similar work will be forthcoming for other factories.

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture. Say I have a player character that can be stationary, running, or swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches:
    1. Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.
    2. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components: SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.
    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lump all AI logic into one component, or break each logic state into separate components to create entity variants more easily?
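
    A third option, not spelled out in the post, is to keep a single generic state-machine component whose states and transition rules are plain data, so designers compose entity variants by editing data rather than code. A minimal Python sketch of that idea (all names are illustrative):

        # One reusable component; the per-entity behaviour lives in the data passed to it.
        class StateMachineComponent:
            def __init__(self, transitions, initial, uninterruptible=()):
                self.transitions = transitions            # e.g. {"stand": {"walk", "swing"}, ...}
                self.current = initial
                self.uninterruptible = set(uninterruptible)
                self.locked = False                       # True while an uninterruptible state runs

            def request(self, new_state):
                """Switch state if the transition table allows it and we are not locked."""
                if self.locked or new_state not in self.transitions.get(self.current, set()):
                    return False
                self.current = new_state
                self.locked = new_state in self.uninterruptible
                return True

            def finish_current(self, rest_state="stand"):
                """Called by the animation/timer system when e.g. the swing completes."""
                self.locked = False
                self.current = rest_state

        # A sword-swinging guard and a walk-only villager built from the same component:
        guard = StateMachineComponent(
            transitions={"stand": {"walk", "swing"}, "walk": {"stand", "swing"}, "swing": set()},
            initial="stand",
            uninterruptible=("swing",),
        )
        villager = StateMachineComponent(
            transitions={"stand": {"walk"}, "walk": {"stand"}},
            initial="stand",
        )

        assert guard.request("swing")         # allowed from "stand"
        assert not guard.request("walk")      # blocked until finish_current() is called
        assert not villager.request("swing")  # the villager's data simply has no such transition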

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and has determined to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article

  • Solutions for cheaply replacing a poorly-supported onboard ATI card with discrete graphics on a desktop machine?

    - by echo-flow
    I have put Ubuntu on my mum's desktop computer. Unfortunately, the open source radeon driver does not work well with the onboard ATI graphics, and ATI's proprietary driver no longer supports the hardware at all. In order to use the ATI proprietary driver with this hardware, it is necessary to use an older version of Xorg, which is now only available in versions of Ubuntu older than 8.10. Unfortunately, the open source radeon driver seems to be causing X to lock up intermittently when my mum uses Audacity. I'm willing to accept that some hardware is not well-supported on Ubuntu, and so, because this is a desktop computer with a couple of free PCI slots, I think a better solution might simply be to plug in a new graphics card that might have better driver support, and to disable the onboard ATI card in the BIOS. The requirements for this card are that it be inexpensive and have robust (preferably open source) driver support in Ubuntu 10.04. Heavy-duty graphics processing power is not a requirement. A second-hand card on Ebay would also be fine. Can anyone make some recommendations?

    Read the article

  • Dynamic Query Generation: suggestions for better approaches

    - by Gaurav Parmar
    I am currently designing functionality in my web application where a verified user of the application can execute queries of his choosing from a predefined set of queries, with the WHERE clause varying as per the user's choice. For example, table ABC contains the following template query, called SecretReport:
      "Select def as FOO, ghi as BAR from MNO where "
    SecretReport can have the parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4, so if the user chooses ILP=3, he will get the result of the following query on his screen:
      "Select def as FOO, ghi as BAR from MNO where ILP=3"
    The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. This approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course we do not want the end user to take complete control of the DB, but only of tables and fields that are relevant to him. At present we define what is relevant in the code, but I want the admin to take over this functionality, such that he can decide what is relevant and expose the same to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
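
    As an illustration of the "admin defines what is relevant, values stay parameterized" direction, here is a minimal Python sketch. The report name, table and columns are taken from the example above purely for illustration; the key points are the admin-maintained whitelist and the fact that user-supplied values only reach the database as bind parameters.

        import sqlite3

        REPORTS = {
            "SecretReport": {
                "base": "SELECT def AS FOO, ghi AS BAR FROM MNO",
                "allowed_params": {"XYZ", "ILP"},      # admin-maintained whitelist
            },
        }

        def build_report(report_name, user_filters):
            report = REPORTS[report_name]
            clauses, values = [], []
            for column, value in user_filters.items():
                if column not in report["allowed_params"]:
                    raise ValueError("parameter not allowed: %r" % column)
                clauses.append("%s = ?" % column)      # column names come from the whitelist only
                values.append(value)
            sql = report["base"]
            if clauses:
                sql += " WHERE " + " AND ".join(clauses)
            return sql, values

        sql, values = build_report("SecretReport", {"ILP": 3})
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE MNO (def TEXT, ghi TEXT, XYZ INTEGER, ILP INTEGER)")
        print(conn.execute(sql, values).fetchall())    # [] on the empty demo table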

    Read the article

  • Where to place php libraries/extensions?

    - by gdaniel
    I am new to a lot of server configurations and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP library. Their website instructions are minimal:
      Usage: This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path:
      <?php
      set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
      include('Net/SSH2.php');
      ?>
    That tells me how to use the library, but it doesn't tell me where to put the actual library files. Should I add them under:
    - /usr/local/lib ?
    - /usr/local/lib/php ?
    - /usr/local/lib/php/pear ?
    Or can I add them under public_html? Also, my VPS has several users under /home/... Is there a way to make the library available to only one user?

    Read the article

  • Looking for job advice [closed]

    - by EntryLevelJavaDeveloper
    I am a software developer for a government agency in DC, and I have recently completed one year of employment. I am generally dissatisfied with my experiences here. I do not want to gripe too much, but I do not spend a lot of time doing actual development on projects. I am asked to do everything under the sun: write requirements, review specs, test, attend random meetings; actual coding makes up a small fraction of my time. The coding itself is fairly straightforward and simple, so it feels like I am not growing from my experiences. I am not tasked with more challenging work, and I find the experience unrewarding. If I had a stronger resume and more work experience, I'd leave the position immediately, but given the present economy I am hesitant to leave. I have several questions:
    - Does anybody have experiences like this? How did you make the most of it?
    - I am currently doing some side projects, making simple web pages for people, but aside from that and open source projects, what other things are out there?
    - What are the general benchmarks for a developer after one year of professional experience? What should I be expected to know and do? I am an outsider (coming from a math/science background), so I do not know exactly what I should know.
    - Is it possible to obtain a mentorship with a mid-level or senior developer to learn? If so, how can I go about making contacts in the DC area?

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process. We have requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resourcing issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't it too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality just because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation and not test execution, where we avoid involving developers.)

    Read the article

  • Help deciding on language for a complex desktop - web application

    - by user967834
    I'm about to start working on a fairly complex project needing a desktop GUI as well as a web interface, and I need to decide on a language (or languages) to use. This is from an electrical engineering/robotics background. These are the requirements:
    - The program will have to read data from multiple sensors and inputs (motion sensor, temperature sensor, capacitive sensor, infrared, magnetic sensors, etc.) through a port on a computer, so either through USB or Ethernet.
    - The program will have to be able to send control signals based on this input.
    - The program will have to continuously monitor all input signals at all times, so realtime data.
    - The program will require authentication.
    - The program will need to be controllable from a web interface, from anywhere, by logging in to a website.
    - The web interface will also need realtime feedback once authenticated.
    What language do you think would best accomplish this? I was thinking maybe of saving everything into a database which can be accessed by both the desktop and web app. Would Python be able to do all of this? Or something like a remote desktop app? I know this is a complex project, but let's assume I can learn any language. Has anyone done something like this, and if so, how did you accomplish it?
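
    To make the "shared database between the desktop GUI and the web app" idea concrete, here is a minimal Python sketch using only the standard library. read_sensor() is a hypothetical stand-in for the real USB/Ethernet acquisition code, and SQLite stands in for whatever database both front ends would read:

        import random
        import sqlite3
        import time

        def read_sensor(name):
            # Placeholder: a real system would talk to the hardware over USB/Ethernet here.
            return random.uniform(0.0, 100.0)

        def acquisition_loop(db_path, sensors, interval_s=1.0, iterations=5):
            conn = sqlite3.connect(db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")
            for _ in range(iterations):            # in production this loop would run forever
                now = time.time()
                rows = [(now, s, read_sensor(s)) for s in sensors]
                conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
                conn.commit()                      # readers (GUI/web) see each committed batch
                time.sleep(interval_s)
            conn.close()

        def latest_readings(db_path):
            # What the desktop GUI or a web request handler might call to refresh its display.
            conn = sqlite3.connect(db_path)
            rows = conn.execute("SELECT sensor, value, MAX(ts) FROM readings GROUP BY sensor").fetchall()
            conn.close()
            return rows

        acquisition_loop("readings.db", ["temperature", "motion", "capacitive"], interval_s=0.1)
        print(latest_readings("readings.db"))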

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each choice might have: for example, storing the IDs of the entities in the result set on the client, storing them in the user's session on the web server, or storing them in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more for a design comparison between the possible approaches.
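
    One concrete way to get the consistency described above is to snapshot the ordered result IDs once per search, key the snapshot with a token, and serve both paging and next/previous navigation from that frozen list. A minimal Python sketch (an in-memory dict stands in for wherever the snapshot is actually stored: the session, a cache server, or a temporary table):

        import uuid

        SNAPSHOTS = {}          # token -> ordered list of result IDs (server-side cache)
        PAGE_SIZE = 20

        def run_search(query):
            # Placeholder for the expensive search; returns an ordered list of entity IDs.
            return list(range(1, 1001))

        def start_search(query):
            token = uuid.uuid4().hex
            SNAPSHOTS[token] = run_search(query)   # executed once, then reused for paging
            return token

        def get_page(token, page):
            ids = SNAPSHOTS[token]
            start = page * PAGE_SIZE
            return ids[start:start + PAGE_SIZE]

        def neighbours(token, entity_id):
            """Previous/next IDs for drill-down navigation, relative to the snapshot."""
            ids = SNAPSHOTS[token]
            i = ids.index(entity_id)
            prev_id = ids[i - 1] if i > 0 else None
            next_id = ids[i + 1] if i < len(ids) - 1 else None
            return prev_id, next_id

        token = start_search("example query")
        print(get_page(token, 0))          # first twenty IDs
        print(neighbours(token, 40))       # (39, 41)

    The trade-off is roughly the one the question hints at: the snapshot costs memory or storage per active search (and is deliberately stale), in exchange for cheap, consistent paging and navigation.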

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include:
    - faster compilation time
    - simpler deployment
    - faster startup time for your program
    - fewer assemblies to reference
    I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons.
    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms.
    C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies.
    My personal view: I see A as silos, a.k.a. cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following example of a use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think about the products; the buyer information; the region, country and state of the sale point; and so on). In order to think with a de-normalized approach, I don't put identifiers of these related items in my main orders collection. Instead, I repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of his orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB locks at the document level on updates, I would be blocking the entire order at the moment of the update).
    - Will I have to replicate all the products' related data (i.e. category, maker and optional attributes like color, size...)?
    - What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection (and what about the performance)? Or should I create another collection with these queries in mind and replicate again all the products' information (and their orders)?
    - Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant of requirements changes (and what about emulating JOINs)?
    - Would the optimal approach be a mixed solution with an RDBMS like MySQL in order to retrieve the complete data? I mean: store product, user, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query ).
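
    For the "store a product_id and emulate the join" option, here is a minimal Python sketch, assuming a local MongoDB instance and the pymongo driver (collection and field names are illustrative). It also shows that product-centric reporting can be done with $unwind directly on the orders collection rather than by maintaining a second, product-oriented copy of the data:

        from pymongo import MongoClient

        db = MongoClient().shop
        db.orders.drop(); db.products.drop()

        db.products.insert_many([
            {"_id": 1, "name": "Widget", "category": "tools", "color": "red"},
            {"_id": 2, "name": "Gadget", "category": "tools", "color": "blue"},
        ])
        db.orders.insert_many([
            {"_id": 10, "buyer_id": 7, "items": [{"product_id": 1, "qty": 3}, {"product_id": 2, "qty": 1}]},
            {"_id": 11, "buyer_id": 8, "items": [{"product_id": 1, "qty": 2}]},
        ])

        # Application-level "join": one extra round trip fetches every referenced product at once.
        order = db.orders.find_one({"_id": 10})
        ids = [item["product_id"] for item in order["items"]]
        products = {p["_id"]: p for p in db.products.find({"_id": {"$in": ids}})}

        # Product-centric reporting straight from orders via $unwind, without a second collection.
        pipeline = [
            {"$unwind": "$items"},
            {"$group": {"_id": "$items.product_id", "units_sold": {"$sum": "$items.qty"}}},
        ]
        print(list(db.orders.aggregate(pipeline)))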

    Read the article

  • Uralelektrostroy Improves Turnaround Times for Engineering and Construction Projects by Approximately 50% with Better Project Data Management

    - by Melissa Centurio Lopes
    LLC Uralelektrostroy was established in 1998, to meet the growing demand for reliable energy supply, which included the deployment and operation of a modern power grid system for Russia’s booming economy and industrial sector. To rise to the challenge, the country required a company with a strong reputation and the ability to strategically operate energy production and distribution facilities. As a renowned energy expert, Uralelektrostroy successfully embarked on the mission, focusing on the design, construction, and operation of power grids, transmission lines, and generation facilities. Today, Uralelektrostroy leads the Russian utilities industry with operations across the country, particularly in the Ural, Western Siberia, and Moscow regions.
    Challenges:
    - Track work progress through all engineering project development stages with ease (from planning and start-up operations, to onsite construction and quality assurance) to enhance visibility into complex projects, such as power grid and power-transmission-line construction
    - Implement and execute engineering projects faster (for example, designing and building power generation and distribution facilities) by better monitoring numerous local subcontractors
    - Improve alignment of project schedules with project owners’ requirements (awarding federal and regional authorities) to avoid incurring fines for missing deadlines
    Solutions:
    - Used Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 to streamline communication with customers and subcontractors through better data management and harmonized reporting, reducing construction project implementation and turnaround times by approximately 50%, on average
    - Enabled fast generation of work-in-progress reports that track project schedules, budgets, materials, and staffing, from approval and material procurement, to construction and delivery
    - Reduced the number of construction sites by nearly 30% (from 35 to 25) by identifying unprofitable sites, streamlining operations at the company’s construction site network and increasing profitability
    - Improved project visibility by enabling managers to efficiently track project status, ensuring on-time reporting and punctual project deliveries to federal customers to reduce delay penalties to zero
    “Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 drastically changed the way we run our business. We’ve reduced the number of redundant assets, streamlined project implementation and execution, and improved collaboration with our customers and contractors. Overall, the Oracle deployment helped to increase our profitability.” – Roman Aleksandrovich Naumenko, Head of Information Technology, LLC Uralelektrostroy
    Read the complete customer snapshot here.

    Read the article

  • Get the Latest Security Inside Out Newsletter, October Edition

    - by Troy Kitch
    The latest October edition of the Security Inside Out newsletter is now available and covers the following important security news:
    Securing Oracle Database 12c: A Technical Primer
    The new multitenant architecture of Oracle Database 12c calls for adopting an updated approach to database security. In response, Oracle security experts have written a new book that is expected to become a key resource for database administrators. Find out how to get a complimentary copy. Read More
    HIPAA Omnibus Rule Is in Effect: Are You Ready?
    On September 23, 2013, the HIPAA Omnibus Rule went into full effect. To help Oracle’s healthcare customers ready their organizations for the new requirements, law firm Ballard Spahr LLP and the Oracle Security team hosted a webcast titled “Addressing the Final HIPAA Omnibus Rule and Securing Protected Health Information.” Find out three key changes affecting Oracle customers. Read More
    The Internet of Things: A New Identity Management Paradigm
    By 2020, it’s predicted there will be 50 billion devices wirelessly connected to the internet, from consumer products to highly complex industrial and manufacturing equipment and processes. Find out the key challenges of protecting identity and data for the new paradigm called the Internet of Things. Read More

    Read the article
