Search Results

Search found 21331 results on 854 pages for 'require once'.

Page 217/854 | < Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >

  • LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management

    - by Melissa Centurio Lopes
    LUKOIL Overseas Group is a growing oil and gas company and an integral part of the vertically integrated oil company OAO LUKOIL. It is engaged in the exploration, acquisition, integration, and efficient development of oil and gas fields outside the Russian Federation, as part of transforming LUKOIL into a transnational energy company. In 2010, the company signed a 20-year development contract for the giant West Qurna-2 oil field in Iraq. Executing 10,000 to 15,000 project activities simultaneously across 14 major construction and drilling projects for West Qurna-2 meant the company needed a clear picture, in real time, of the dependencies between its capital construction, geologic exploration, and sinking projects—all required for its oil field development and infrastructure work in Iraq. LUKOIL Overseas Holding deployed Oracle’s Primavera P6 Enterprise Project Portfolio Management to generate structured project management information and optimize planning, monitoring, and analysis of all engineering and commercial activities—such as tenders and bulk procurement of materials and equipment—related to oil field development projects. A word from LUKOIL Overseas Holding Ltd.: “Previously, we created project schedules on desktop computers and uploaded them to the project server to be merged into one big file for each project participant to access. This was not scalable, as we’ve grown and now run up to 15,000 activities in numerous projects and subprojects at any time. With Oracle’s Primavera P6 Enterprise Project Portfolio Management, we can now work concurrently on projects with many team members, enjoy absolute security, and issue new baselines for all projects and project participants once a week, with ease.” – Sergey Kotov, Head of IT and the Communication Office, LUKOIL Mid-East Ltd. Oracle Primavera solutions:
    · Facilitated managing dependencies between projects by enabling the general scheduler to reschedule all projects and subprojects once a week, realigning the 10,000 to 15,000 project activities that the company runs at any time
    · Replaced Microsoft Project and a paper-based system with a complete solution that provides structured project data
    · Enhanced data security by establishing project management security policies that enable only authorized project members to edit their project tasks, while enabling each project participant to view all project data relevant to that individual’s task
    · Enabled the company to monitor project progress against the projected plan, based on physical project assets, to determine whether each project is on track to conclude within its time and budget limitations
    To view the full list of solutions, view here.
    “Oracle Gold Partner Parma Telecom was key to our successful Primavera deployment, implementing the software’s basic functionality—such as project content, timeframe management, and cost management—in addition to integrating it with our enterprise resource planning system and intranet portal within ten months and within budget,” said Rafik Baynazarov, head of the master planning and control office, LUKOIL Mid-East Ltd. To read the full version of the customer success story, please view here.

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that follows Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further. I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as a separate file, or inside the one bundled build, as before. If I go down this path of extracting the libraries and providing a bundled version of the final code, does this require a major version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought is no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API for the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump the first, second, or third dot? Or something else?

    Read the article

  • Microsoft Certifications – how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, “how do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do. The first thing I do is go to Microsoft’s website to find the exam that I need to take. If you’re looking to get a particular certification, their site lists the exam or exams that you’ll need to pass. If you’ve already taken an exam, you can log onto the MCP website and use their certification planner. This little tool tells you what tests you need, based on the exams you’ve already passed. It is very helpful with certifications that require multiple tests, and especially ones that have electives. Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers. This is a good outline to follow when you study. I’ll keep this handy to refer back to throughout my studying, to make sure that I’m covering all the topics I need to know. The next step is probably where I am a little different from others. If the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests. However, if I’m looking at the outline and wondering how in the world you do that – then it’s time to hit the books. So, where to find study materials? Try typing the exam number into any search engine. You’ll typically find a ton of resources. If you’re lucky, you’ll find books that others recommend based on their studying and exam experience. As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams. Once my studying is done (either through books or experience), I’ll go through the practice exams. I find them really helpful in getting my knowledge lined up with the thinking process that the exam writers use. If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill. That’s about it. If I’m doing ok on the practice exams, then I’ll take the real thing. I’ve found that the practice exams are usually more difficult than the real thing. Oh – one other thing I do related to Microsoft exams – I try to take any beta exams that Microsoft makes available that fall into my skill set. Microsoft has started a blog to announce these, and the seats usually fill up really quickly. The blog is at http://blogs.technet.com/betaexams/ . You don’t get your results instantly, like a normal exam; instead, you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping. So, be prepared to wait six to eight weeks for your results.

    Read the article

  • Remote Working & Relocation

    - by James Burgess
    Sorry if this question is a duplicate; I did some extensive searching and found nothing on quite the same topic (though a couple on partially overlapping topics). Recently, whilst on holiday in Munich, Germany, I was taken aback by the sheer number of programming-related posts available in the city that I easily qualify for (both in terms of knowledge and experience). The advertised working environments seemed good and the pay seemed to be at least as good as what I'd expect here in the UK. Probably 80% of the advertisements I saw on the underground were for IT-related jobs, and a good 60% of those I was easily qualified for. At the moment, I work as a freelancer, mostly on web and small software projects, but seeing the vast availability of jobs in Munich versus my local area has me thinking about remote working. I'm unable to relocate for a job for the next 3 years (my wife has a contract to continue being a doctor at her current hospital for that time) but would almost certainly be open to it after that (after all, my wife and I both love Munich). In the meanwhile, I would be very interested in remote working. So, my question is this: do companies ever take on remote workers (even with semi-frequent trips to the office) from abroad, with a view to later relocation? And, if so, how do you go about broaching the topic with a recruiter when getting in contact about a job posting? Language isn't a barrier for me here, as 90% of the jobs I've looked up in Munich don't require German speakers (it seems they have a big recruiting market abroad). I'm also under no illusions about the disadvantages of remote working, but I'm more interested in the viability of the scenario rather than the intricacies (at least at this point). I'd really appreciate any contributions, especially from those who have experience with working in such a scenario!

    Read the article

  • Is my concept of open source licensing correct?

    - by tester
    I would like to confirm whether my understanding of open source licenses is correct, since, as you know, misunderstanding the terms can lead to a serious lawsuit. Thank you. The main difference among open source licenses is whether the license is copyleft. A copyleft license allows others to reproduce, modify, and distribute the product, but the released product is bound by the same licensing restrictions; that is, they have to use the same license for the modified version. The copyleft licenses also require all released modified versions to be free software. On the other hand, if others create derived work incorporating non-copyleft licensed code, they can choose any license for that code. The several kinds of licenses, and a comparison: GPL is a restrictive license. Software must be released under the GPL if it integrates with, or is modified from, other GPL-licensed software. The libraries used in developing GPL-licensed software are also restricted to GPL and LGPL; proprietary software is not allowed to be employed in (or compiled into) any part of a GPL application. LGPL is similar to GPL, but is more permissive regarding the use of other, non-GPL software. BSD is a relatively simple license; it allows developers to do anything with the original source code, and the license holders do not hold any legal responsibility for their released product. The Apache license evolved from the BSD license; its legal terms were improved and written by legal professionals in a more modern way, and it comprehensively covers intellectual property ownership and liability issues. Also, are there any popular licenses besides these? Thank you.

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based, tile-based puzzle game, and to create new entities, I use this code:
    Field.CreateEntity(10, 5, Factory.Player());
    This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like:
    public void CreateEntity(int mX, int mY, Entity mEntity)
    {
        mEntity.Field = this;
        TileManager.AddEntity(mEntity, true);
        GetTile(mX, mY).AddEntity(mEntity);
        mEntity.Initialize();
        InvokeOnEntityCreated(mEntity);
    }
    Since many of the components (and much of the logic) of the entities need to know which tile they're in, or which field they belong to, I need the mEntity.Initialize(); call to mark the point at which the entity knows its own field and tile. The Initialize(); method contains a call to an event handler, so that I can do stuff like this in the factory class:
    result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag);
    result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent());
    This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the factory class, to do stuff like:
    new Entity(Field, 10, 5);
    But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
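    One possible direction (a minimal C# sketch, not from the original question; TileManager, the OnEntityCreated event, and error handling are omitted, and all member names are assumptions) is to make Field the only entry point that constructs entities, passing field and tile through the constructor, so there is no window where they are unset and no separate Initialize step:
    using System;
    using System.Collections.Generic;

    public class Tile
    {
        private readonly List<Entity> entities = new List<Entity>();
        public void AddEntity(Entity e) => entities.Add(e);
    }

    public class Entity
    {
        public Field Field { get; }
        public Tile Tile { get; }

        // Field and tile arrive through the constructor, so the entity
        // is never observable in a half-initialized state.
        public Entity(Field field, Tile tile)
        {
            Field = field;
            Tile = tile;
        }
    }

    public class Field
    {
        private readonly Tile[,] tiles;

        public Field(int width, int height)
        {
            tiles = new Tile[width, height];
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    tiles[x, y] = new Tile();
        }

        public Tile GetTile(int x, int y) => tiles[x, y];

        // Field is the single entry point for creation; the configure
        // delegate plays the role of the factory's OnInitialize event.
        public Entity CreateEntity(int x, int y, Action<Entity> configure)
        {
            var entity = new Entity(this, GetTile(x, y));
            configure(entity);
            GetTile(x, y).AddEntity(entity);
            return entity;
        }
    }
    In this shape, the configure delegate does what the factory's OnInitialize event does today: tags and components are added at a point where the entity already knows its field and tile.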

    Read the article

  • MVC + 3-tier: where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as a reference: CodeProject: MVC + N-tier + Entity Framework, and Separating data access in ASP.NET MVC. I have the following design so far.
    Presentation Layer (PL) (main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main (Views/, Controllers/, ...)
    Business Logic Layer (BLL): MyProjectName.BLL (ViewModels/, ProjectServices/, ...)
    Data Access Layer (DAL): MyProjectName.DAL (Models/, Repositories.EF/, Repositories.Dapper/, ...)
    Now, PL references BLL, and BLL references DAL. This way a lower layer does not depend on the one above it. In this design, PL invokes a service in the BLL. PL can pass a ViewModel to BLL, and BLL can pass a ViewModel back to PL. Also, BLL invokes the DAL, and the DAL can return a Model back to BLL; BLL can in turn build a ViewModel and return it to PL. Up to now this pattern has been working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do the joins and then select new MyViewModel(){ ... }. But now, in the DAL, I do not have access to where the ViewModels are defined (in the BLL). This means I cannot do the joins in the DAL and return the result to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query), and the BLL would then use the results to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
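    One common way out (a rough C# sketch, not from the sources above; the repository, DTO, and property names are all invented for illustration) is to let the DAL expose flat, presentation-agnostic projections (DTOs) that a join can populate, and keep the ViewModel mapping in the BLL:
    using System.Collections.Generic;
    using System.Linq;

    // DAL: a flat, presentation-agnostic projection that a join can populate.
    public class OrderSummaryDto
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        IEnumerable<OrderSummaryDto> GetOrderSummaries();
    }

    // BLL: the ViewModel stays here, built from the DAL projection.
    public class OrderSummaryViewModel
    {
        public int OrderId { get; set; }
        public string DisplayName { get; set; }
        public string FormattedTotal { get; set; }
    }

    public class OrderService
    {
        private readonly IOrderRepository repository;

        public OrderService(IOrderRepository repository)
        {
            this.repository = repository;
        }

        public IEnumerable<OrderSummaryViewModel> GetOrderSummaries()
        {
            // Mapping DTO -> ViewModel is the BLL's job; the DAL never
            // needs to know that ViewModels exist.
            return repository.GetOrderSummaries().Select(d => new OrderSummaryViewModel
            {
                OrderId = d.OrderId,
                DisplayName = d.CustomerName,
                FormattedTotal = d.Total.ToString("C")
            });
        }
    }
    The join itself then lives inside the DAL's implementation of GetOrderSummaries (for example, a LINQ join over the entities), while ViewModels remain private to the BLL.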

    Read the article

  • SnapBird Supercharges Your Twitter Searches

    - by ETC
    Twitter’s default search tool is a bit anemic. If you want to supercharge your Twitter search, fire up web-based search tool SnapBird and dig into your past tweets as well as those of friends and followers. Yesterday I was trying to find a tweet I’d sent some time last year regarding my search for an application that could count keystrokes for inclusion in my review of the app I finally found to fulfill the need, KeyCounter. Searching for it with Twitter’s search tool yielded nothing. One simple search at SnapBird and I nailed it. SnapBird requires no authentication to search public tweets (both your own and those of your friends and follows) but does require authentication in order to search through your sent and received direct messages. SnapBird is a free service.

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from LUA scripts. What FFI does is look at a dynamic library's exported symbols and let the developer use them directly from LUA. Kind of like Python ctypes. Obviously, using dynamic libraries is not permitted on iOS for security reasons. So in order to come up with a solution I found the following snippet.
    /* (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
       THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
       ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589 */
    extern "C" {
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>
    } // extern "C"

    #include <cassert>
    #include <cstdio> // for printf (added; not in the original snippet)

    // Please note that despite the fact that we build this code as a regular
    // executable (exe), we still use __declspec(dllexport) to export
    // symbols. Without doing that FFI wouldn't be able to locate them!
    extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
    {
        printf("A message from LUA: %s\n", msg);
    }

    const char *lua_code =
        "local ffi = require('ffi') \n"
        "ffi.cdef[[ \n"
        "const char * hello_from_lua(const char *); \n" // matches the C prototype
        "]] \n"
        "ffi.C.hello_from_lua('Hello from LUA!') \n" // do actual C call
    ;

    int main()
    {
        lua_State *lua = luaL_newstate();
        assert(lua);
        luaL_openlibs(lua);
        const int status = luaL_dostring(lua, lua_code);
        if(status)
            printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));
        lua_close(lua);
        return 0;
    }

    // output:
    // A message from LUA: Hello from LUA!
    Basically, instead of using a dynamic library, the symbols are exported directly inside the executable file. The question is: is this permitted by Apple? Thanks.

    Read the article

  • Security in Robots and Automated Systems

    - by Roger Brinkley
    Alex Dropplinger posted a Freescale blog on Securing Robotics and Automated Systems where she asks the question, “How should we secure robotics and automated systems?”. My first thought on this was duh, make sure your robot is running Java. Java's built-in services for authentication, authorization, encryption/confidentiality, and the like can be leveraged to benefit robotic or autonomous implementations. Leveraging these built-in services and the pluggable encryption models of Java makes adding security to an existing bot implementation much easier. But then I thought I should ask an expert on robotics, so I fired the question off to Paul Perrone of Perrone Robotics. Paul has built automated vehicles and other forms of embedded devices, like automated monitoring of commercial vehicles on highways. He says that most of the work that robots do now is autonomous, so it isn't a problem in the short term. But long-term projects, like collision avoidance technology in automobiles, are going to require it. Some of the work he's doing with his Java-based MAX, a set of software building blocks containing a wide range of low-level and higher-level software modules that developers can use to build simple to complex robot and automation applications faster and cheaper, already provides some support for JAUS compliance and, because it is based on Java, access to standards-based security APIs. But, as Paul explained to me, "the bottom line is…it depends on the criticality level of the bot, its network connectivity, and whether or not standards compliance is required."

    Read the article

  • Is it possible to make CSS-added text searchable by a browser?

    - by Andrew Stacey
    I run a website that uses CSS pseudo-classes to insert text here and there. One of them inserts the value of a CSS counter (and it would require considerable re-engineering of the system to do this without CSS text injection). The specific CSS rule is:
    .num_defn .theorem_label:after {
        content: " " counter(definition, decimal);
        counter-increment: definition;
    }
    and this converts "Definition" to "Definition 1" (say). However, the injected text is not searchable by the browser. It doesn't see the 1: if I search for "Definition 1" then it doesn't find it, and if I search for "Definition. Whatever the definition text was" then the browser happily highlights the line except for the inserted 1. So if you imagine the bold text as the highlighting, it would look like: Definition 1. Whatever the definition text was. This is not ideal! People like to refer to definitions by their number and to say "Look at Definition 1 on page XYZ" (and in contexts where hyperlinks are not available - strange, I know, but it does happen). Thus: is there any way that I, on the server end, can designate the injected text as "searchable"? If not, is there a simple way at the browser end that this can be enabled?

    Read the article

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time, I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. That, and parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor. Someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to be JSON, actually, but then it would require a solid editing tool that would hide the underlying implementation from the user. Think Excel and how it abstracts CSV syntax from the user. The reason I'm looking for something like this is because the team has been working with pretty hierarchical data for a while now, and we've hit the limits of how easy it is to represent it in simple CSVs without having to create complex rules for how to represent hierarchy semantics in each row. Any suggestions?

    Read the article

  • How do you get past analysis paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite. I haven't compiled the source code since 2004. It still sells, and it's stable, but it does require the “Run this program in compatibility mode for:” setting on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen resolution programs. Yuck! I can't seem to get started with this rewrite. I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while. Then I start looking around at how the main menu should work/look. I quickly find out that there's this thing called "Cool Bars" and I'll spend hours playing with that. Then I start thinking about stuff like "Oh, I need to make sure that the screen sizes are preserved, so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop. I have always been a slow starter. I think about stuff long and hard up front. This has always plagued me. Once I get my mind made up, then bam... I'm off and running. I'm looking for advice from other one-person software companies that can help someone like me get off to a quicker start.

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly not at all). That is: if new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated. If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them. So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction/conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but when that class has been tested successfully it should be frozen?
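    For illustration, a minimal C# sketch of the scheme described above (the names I, A, and B come from the question; the composition-root method is an assumption). The only code that changes when B matures is the single line that instantiates I:
    public interface I
    {
        int Compute(int input);
    }

    // A: the stable, tested implementation; stays frozen.
    public class A : I
    {
        public int Compute(int input) => input + 1;
    }

    // B: a new implementation written against new requirements,
    // developed and tested without touching A.
    public class B : I
    {
        public int Compute(int input) => checked(input + 1); // e.g. a hardened variant
    }

    public static class CompositionRoot
    {
        // The one line that changes when B replaces A.
        public static I CreateService() => new B();
    }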

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search/read some content in a particular domain. We wanted to add a capability to the app that can suggest content to the user based on usage patterns (analyzing the data based on frequency and relevance). Currently, every time a user searches for or reads something, we store that information in a backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called: data analysis? data mining? business intelligence? or something else? Update: Sorry for being too broad; here is an example SQL database (just to give an idea; the actual db is a little different, with normalization and stuff).
    Table: UserArticles. Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory
    Table: CategoryArticles. Fields: Category | ArticleTitle | Author, etc.
    One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table). Task: use the information available in the UserArticles table to rank categories in the order in which they would be presented to the user automatically in another part of the application. The factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is as described above. I am not too sure which route to take, hence the question. Thoughts?
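    As a starting point, a rough C# sketch of the simple-query route (illustrative only; the record mirrors the UserArticles table above, and the 30-day half-life used to weight recency is an invented parameter):
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record UserArticle(string UserName, int ArticleId, string ArticleCategory, DateTime DateVisited);

    public static class CategoryRanker
    {
        // Score each category by visit frequency, discounting older visits
        // exponentially; a half-life of 30 days is an arbitrary choice.
        public static List<string> RankCategories(IEnumerable<UserArticle> visits, string user, DateTime now)
        {
            return visits
                .Where(v => v.UserName == user)
                .GroupBy(v => v.ArticleCategory)
                .Select(g => new
                {
                    Category = g.Key,
                    Score = g.Sum(v => Math.Pow(0.5, (now - v.DateVisited).TotalDays / 30.0))
                })
                .OrderByDescending(x => x.Score)
                .Select(x => x.Category)
                .ToList();
        }
    }
    The same frequency-plus-recency score could equally be computed in SQL; anything beyond this (relevance across users, recommendations) is where data mining tools would come in.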

    Read the article

  • Create MSDB Folders Through Code

    You can create package folders through SSMS, but you may also wish to do this as part of a deployment process or installation. In this case you will want a programmatic method for managing folders, so how can this be done? The short answer is: go and look at the table msdb.dbo.sysdtspackagefolders90. This is where folder information is stored, using a simple parent-and-child hierarchy format. To add a new folder directly we just insert into the table -
    INSERT INTO dbo.sysdtspackagefolders90 (
        folderid
       ,parentfolderid
       ,foldername)
    VALUES (
        NEWID()                 -- New GUID for our new folder
       ,<<Parent Folder GUID>>  -- Parent folder GUID if a child of another folder, or the root GUID 00000000-0000-0000-0000-000000000000
       ,<<Folder Name>>)        -- New folder name
    There are also some stored procedures - sp_dts_addfolder, sp_dts_deletefolder, sp_dts_getfolder, sp_dts_listfolders, and sp_dts_renamefolder. To add a new folder to the root we could call the sp_dts_addfolder stored procedure -
    EXEC msdb.dbo.sp_dts_addfolder
        @parentfolderid = '00000000-0000-0000-0000-000000000000' -- Root GUID
       ,@name = 'New Folder Name'
    The stored procedures wrap very simple SQL statements, but provide a level of security, as they check the role membership of the user and do not require permissions to perform direct table modifications.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh. The existing system: a Microsoft Access project performs dashboard duties; unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project; 3rd-party libraries referenced by Access directly interface with the control system (read: motors, conveyors, and counters); and individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) run at each facility. The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and the original sponsor is prescribing the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestions that Access is ill-suited for a project of this scope are shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • PHP/MySQL Database application development tool

    - by RCH
    I am an amateur PHP coder and have built a couple of dozen projects from scratch (including fairly simple e-commerce systems with user authentication, PayPal integration, etc., all coded by hand from a clean page; I have also done a price comparison engine that takes data from multiple sites). But I am no expert with OO and other such advanced techniques - I just have a fairly decent grasp of the basics of data processing, logic, functions, and trying to optimize code as much as possible. I just want to make this clear so you have some idea of where I'm coming from. I have a couple of fairly large new projects on my plate for corporate clients; both require bespoke database-driven applications with complex relationships, many tables, and lots of different front-end functions to manipulate that data for the internal staff at these companies. I figured building these systems from scratch would probably be a huge waste of time. Instead, there must be tools out there that will allow me to construct MySQL databases and build the pages with things like pagination, action buttons, table construction, etc. Some kind of database abstraction layer, or system generator, if you will. What tool do you recommend for such a purpose for someone at my level? Open source would be great, but I don't mind paying for something decent as well. Thanks for any advice.

    Read the article

  • C++ and system exceptions

    - by Abyx
    Why doesn't standard C++ respect system (foreign or hardware) exceptions? E.g., when a null pointer dereference occurs, the stack isn't unwound, destructors aren't called, and RAII doesn't work. The common advice is "to use the system API". But on certain systems, specifically Win32, this doesn't work. To enable stack unwinding for this C++ code
    // class Foo;
    // void bar(const Foo&);
    bar(Foo(1, 2));
    one should generate something like this C code
    Foo tempFoo;
    Foo_ctor(&tempFoo);
    __try {
        bar(&tempFoo);
    } __finally {
        Foo_dtor(&tempFoo);
    }
    and it's impossible to implement this as a C++ library. Upd: The standard doesn't forbid handling system exceptions. But it seems that popular compilers like g++ don't respect system exceptions on any platform, just because the standard doesn't require it. The only thing that I want is to use RAII to make code readable and programs reliable. I don't want to put hand-crafted try\finally around every call to unknown code. For example, in this reusable code, AbstractA::foo is such unknown code:
    void func(AbstractA* a, AbstractB* b)
    {
        TempFile file;
        a->foo(b, file);
    }
    Maybe one will pass to func such an implementation of AbstractA which, every Friday, will not check whether b is NULL, so an access violation will happen, the application will terminate, and the temporary file will not be deleted. How many months will users suffer because of this issue, until either the author of func or the author of AbstractA does something about it? Related: Is `catch(...) { throw; }` a bad practice?

    Read the article

  • An ideal way to decode JSON documents in C?

    - by AzizAG
    Assuming I have an API to consume that uses JSON as a data transmission method, what is an ideal way to decode the JSON returned by each API resource? For example, in Java I'd create a class for each API resource, then instantiate an object of that class and consume data from it. For example:
    class UserJson extends JsonParser {

        public UserJson(String document) {
            /* Initial document parsing goes here... */
        }

        // A bunch of getter methods
        ...
    }
    Then I'd probably do something like this:
    UserJson userJson = new UserJson(jsonString); // initial parsing goes in the constructor
    String username = userJson.getName(); // parse the JSON name property, then return it as a String
    Or, when using a programming language with associative arrays (i.e., hash tables), the decoding process doesn't require creating a class: (PHP)
    $userJson = json_decode($jsonString, true); // decode JSON as key => value pairs
    $username = $userJson['name'];
    But when I'm programming in procedural programming languages (C), I can't go with either method, since C is not object-oriented and doesn't support associative arrays (by default, at least). What is the "correct" method of parsing pre-defined JSON strings (i.e., JSON documents specified by the API provider via examples or documentation)? The method I'm currently using is creating a file for each API resource to parse. The problem with this method is that it's basically a lousy version of the OOP method: it looks exactly like the OOP method but doesn't provide any OOP benefits (e.g., I can't pass an object of the parser around, etc.). I've been thinking about encapsulating each API resource parser file in a publicly accessed structure (pointing all functions/publicly usable variables to the structure), then accessing the parser file code from within the structure (parser.parse(), parser.getName(), etc.). While this looks a bit better than my current method, it's still just a rip-off of the OOP way, isn't it? Any suggestions for methods to parse JSON documents in procedural programming languages? Any comments on the methods I'm currently using (any of the three)?

    Read the article

  • MRP/SCP (Not ASCP) Common Issues

    - by Annemarie Provisero
    ADVISOR WEBCAST: MRP/SCP (Not ASCP) Common Issues PRODUCT FAMILY: Manufacturing - Value Chain Planning   March 9, 2010 at 8 am PT, 9 am MT, 11 am ET   This session is intended for System Administrators, Database Administrators (DBAs), Functional Users, and Technical Users. We will discuss issues that are fairly common and will provide the general solutions to the same. We will not only review presentation material but also walk through some of the application setups/checks. TOPICS WILL INCLUDE:
    Gig data memory limitation
    Setup requirements for MRP Manager, Planning Manager, and Standard Manager
    Why components are not planned
    Sales Order Flow to MRP
    Calendars
    Patching
    Miscellaneous
    Forecast Consumption - only if we have time
    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • TDD: Write a separate test for object initialization, or rely on other tests to exercise it?

    - by DXM
    This seems to be the common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't be easily altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is exercising the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you take it out and let other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading, this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and, if that fails, you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
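    For what it's worth, a minimal sketch of the shared-setup compromise (shown here in C# with NUnit rather than CppUnit, with an invented Document class standing in for the legacy object): the fixture's SetUp performs the load for every test, one explicit test pins down initialization, and the remaining tests state their real intent without repeating the load logic.
    using NUnit.Framework;

    public class Document
    {
        public bool IsLoaded { get; private set; }
        public int PageCount { get; private set; }

        public void Load(string path)
        {
            // Stub standing in for the real legacy load logic.
            IsLoaded = true;
            PageCount = 1;
        }
    }

    [TestFixture]
    public class DocumentTests
    {
        private Document doc;

        // Shared arrange step: every test starts from a loaded object,
        // so the load code is written once instead of per test.
        [SetUp]
        public void SetUp()
        {
            doc = new Document();
            doc.Load("fixtures/valid.doc"); // hypothetical fixture path
        }

        // One explicit test documents what "loaded" means; if loading
        // breaks, this is the failure to read first.
        [Test]
        public void Load_PutsObjectIntoValidState()
        {
            Assert.That(doc.IsLoaded, Is.True);
        }

        [Test]
        public void PageCount_ReflectsLoadedContent()
        {
            Assert.That(doc.PageCount, Is.GreaterThan(0));
        }
    }
    If loading breaks, the whole suite still goes red, but the dedicated initialization test points at where to start looking.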

    Read the article

  • Dealing with bad/incomplete/unclear specifications?

    - by eagerMoose
    I'm working on a project where our dev team gets the specifications from the business part of the company. Both business management and IT management require estimates and deadline projections, as they should. The good thing is that estimates are mostly made by the actual developers who get to do the required features. The bad thing is that the specifications are usually either too simple (it turns out you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (up to the point that you can't even visualize where everything would "fit" in the app). More often than not, the business part of the specs is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to give an estimate, and we do try to clear up uncertainties, usually by meeting with whoever wrote the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we end up in trouble, as a lot of the spec seems to have holes. How do you deal with this? Are you generous on estimates in advance?

    Read the article

  • Everything has an Interface [closed]

    - by Shane
    Possible Duplicate: Do I need to use an interface when only one class will ever implement it? I am taking over a project where every single real class implements an interface. The vast majority of these interfaces are implemented by a single class that shares a similar name and the exact same methods (e.g., MyCar and MyCarImpl). Almost no interface in the project is implemented by more than the one class that shares its name. I know the general recommendation is to code to an interface rather than an implementation, but isn't this taking it a bit too far? The system might be more flexible, in that it is easier to add a new class that behaves very much like an existing class. However, it is significantly harder to parse through the code, and method changes now require two edits instead of one. Personally, I normally only create interfaces when there is a need for multiple classes to have the same behavior. I subscribe to YAGNI, so I don't create something unless I see a real need for it. Am I doing it all wrong, or is this project going way overboard?

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me to perform the world update either on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore); fetching as many as 20 static variables (from the datastore or persisted in server memory); and writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.

    Read the article
