Search Results

Search found 23901 results on 957 pages for 'mysql stored procedure'.

Page 511/957

  • MSDN / TechNet Key Importer for KeePass 2

    - by Stacy Vicknair
    If you have an MSDN account and, like me, you claim keys just as systematically as you forget which keys you've used in which test environments, this might help. In a meager attempt to keep track of my keys I created an importer for KeePass 2 that takes in the XML document you can export from MSDN and TechNet. The source is available at https://github.com/svickn/MicrosoftKeyImporterPlugin.

    How do I get my KeysExport.xml from MSDN or TechNet? Easy! First, in MSDN, go to your product keys. From there, at the top right, select Export to XML. This will let you download an XML file full of your Microsoft keys.

    How do I import it into KeePass 2? The instructions are simple and available in the GitHub ReadMe.md, so I won't repeat them. Here is a screenshot of what the imported result looks like. As you can see, the import process creates a group called Microsoft Product Keys and creates a subgroup for each product. The individual entries each represent an individual key, stored in the password field. The importer decides whether a key is new based on the key stored in the password, so you can edit the notes or title of the individual entries however you please without worrying about them being overwritten or duplicated if you re-import an updated KeysExport.xml from MSDN. This lets you keep track of where those pesky keys are in use and have the keys available anywhere you can access your KeePass database!

    Read the article

  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning three new tools: R, Sweave, and LaTeX. One question came to mind while working through my first project: where do I place the majority of the R code? The tutorials I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code from the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then when running Sweave, I load up that .Rout file, so that I have the majority of my calculations already completed and in the Sweave R session. Then my LaTeX callouts to R are quite simple: just give me the xtable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file possible, to achieve the desired result (embedding stats results in the LaTeX write-up). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged again. So we basically end up with two datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect him of trying to make his life easier when it comes to backing up and upgrading MySQL. There are not many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.
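
    For what it's worth, the two datasets described above map onto a fairly small MySQL schema. The sketch below is only illustrative; every table and column name is an assumption of mine, not something from the post, and it assumes one row per 1 Hz sample with the quality flag kept on the processed side (the only part expected to change).

        -- Illustrative only: one possible MySQL layout for the raw and processed datasets.
        CREATE TABLE raw_sample (
            instrument_id  INT       NOT NULL,
            sampled_at     DATETIME  NOT NULL,      -- 1 Hz timestamp
            voltage        DOUBLE    NOT NULL,
            pressure       DOUBLE    NULL,
            temperature    DOUBLE    NULL,
            flow_rate      DOUBLE    NULL,
            PRIMARY KEY (instrument_id, sampled_at)
        );

        CREATE TABLE processed_sample (
            instrument_id  INT       NOT NULL,
            sampled_at     DATETIME  NOT NULL,
            value          DOUBLE    NOT NULL,      -- calibration curve applied
            quality_flag   TINYINT   NOT NULL,      -- the only column expected to change
            PRIMARY KEY (instrument_id, sampled_at)
        );

    A layout like this also makes the "compare instruments over many days" queries a simple join on (instrument_id, sampled_at), which is exactly where daily text files get awkward.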

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application whose function is scoring, saving and loading tennis games. The architecture has two kinds of elements: components and layers. Components are standalone elements that can be consumed by other components or by layers; they might also consume functionality from the model/bottom layer. Layers are software components whose functionality rests on previous layers (except for the model layer).

    Layers:
    - Models: data and its behavior.
    - Controllers: a layer that allows interaction between the views and the models.
    - Views: the presentation layer for interacting with the user.

    Components:
    - Persistence: makes sure the game data can be stored away for later retrieval.
    - Time Machine: records changes in the game through time so it's possible to navigate the game back and forth.
    - Settings: contains the settings that determine how some of the game logic will apply.
    - Game Engine: contains all the game logic, which it applies to the game data to determine the path the game should take.

    This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png

    The requirements this architecture should satisfy are the following:
    - Save and load games.
    - Move through game history and see how the scoreboard changes as the game evolves.
    - Tie-breaks must be properly managed.
    - Games must be classified by hit type.
    - Every point can be modified.
    - Match name and player names must be stored.
    - Game logic must be configurable by the user.

    I would really appreciate any kind of advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of all clients would be stored in the same table, creating a mess. The same currently goes for the firewall rules, and for the invoice and support system. I'm looking for a way to structure my database in a more organized way, to hold large amounts of data for multiple users. This is the first project I'm developing that deals with this much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database will store user data (email, password, id, etc.) and admin/website settings. Then each client will have a unique database labeled prefix_userid, which carries the tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but would like to stay safe and be as efficient as possible.
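
    Purely as a sketch of the layout proposed above, the split might look like the following in MySQL. Every name here is illustrative (mine, not from the question), and whether per-client databases actually beat one well-indexed shared table is exactly the open question being asked.

        -- Central database: accounts and global settings (names are assumptions).
        CREATE TABLE users (
            user_id   INT AUTO_INCREMENT PRIMARY KEY,
            email     VARCHAR(255) NOT NULL UNIQUE,
            pass_hash CHAR(60)     NOT NULL
        );

        -- Per-client database (e.g. stats_1042): that client's traffic only.
        CREATE TABLE traffic (
            hit_id     BIGINT AUTO_INCREMENT PRIMARY KEY,
            hit_time   DATETIME      NOT NULL,
            ip_address VARBINARY(16) NOT NULL,
            referrer   VARCHAR(2048),
            page_url   VARCHAR(2048),
            user_agent VARCHAR(512),
            KEY idx_hit_time (hit_time)
        );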

    Read the article

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and at the main draw phase it gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To make it clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. So now I need another piece of information to be stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried saving the textures in PNG/TGA formats and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a diffuse map, a normal map and a specular map) I was only able to read alpha successfully from the diffuse map! Here is how I add specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    So data.a is always 1 in my case. Could you suggest a reason?

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I develop high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields calculated over millions of records, processes that chew through huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight and check the results. But this only covers the T-SQL, and continuous integration is hard. The bigger problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach worked out over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but it is quite similar)? I hope this is not too open-ended a question.
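
    As an illustration of the "tests as stored procedures in a TEST schema" idea mentioned above, a single test might look like the sketch below. Everything in it is an assumption for illustration: the procedure under test (dbo.CalcInterest), the table it reads (dbo.Transactions) and the expected total are not from the original post.

        -- Sketch only: one test written as a stored procedure in a TEST schema.
        CREATE PROCEDURE TEST.CalcInterest_ReturnsExpectedTotal
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRAN;                                  -- keep stub data out of the real tables

            INSERT INTO dbo.Transactions (AccountId, Amount, PostedOn)
            VALUES (1, 100.00, '20240101'),
                   (1, 200.00, '20240102');              -- stub only what this test needs

            DECLARE @result DECIMAL(18,2);
            EXEC dbo.CalcInterest @AccountId = 1, @Total = @result OUTPUT;

            IF (@result <> 300.00)
            BEGIN
                DECLARE @msg NVARCHAR(200) = N'Expected 300.00, got '
                                             + CONVERT(NVARCHAR(32), @result);
                RAISERROR(@msg, 16, 1);                  -- surfaces in the overnight run
            END

            ROLLBACK TRAN;
        END

    A nightly job can then EXEC every procedure in the TEST schema and collect whatever RAISERROR output comes back, which is roughly the overnight check described above.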

    Read the article

  • accessing live usb files from new hd ubuntu install

    - by Robin Bailey
    After my live USB (Ubuntu 12.04 LTS) refused to boot, I installed the same Ubuntu version on the laptop hard drive (a dual boot next to Windows XP). This all went off without a hitch. Before that, I had spent several weeks enjoying and exploring Ubuntu from the USB pendrive. During this time I changed lots of settings and customized Firefox and more. Now I'd like to import the home folder from the USB drive into the new install's home folder on the hard disk, since to my knowledge that is the folder that holds all those special settings. Unfortunately, being familiar only with Windows file systems, I find the view of the USB file system from the new hard-drive install totally perplexing. I can't find anything that looks anywhere close to the original file system. Worse, I can't find any of the files I created and stored there, like the LibreOffice Calc file that holds all my passwords (this one is really discouraging), which was stored on the Ubuntu desktop. Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those customized settings into the new install would be a major bonus, but hey, I'm not greedy. I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks

    Read the article

  • Any mobile-friendly Credit Card billing solutions for mobile sites similar to Bango?

    - by Programmer
    Are there any mobile-friendly credit card billing solutions for mobile sites similar to Bango? Compared to regular credit card solutions, the advantages of Bango that make it considerably "mobile-friendly" are: 1) It does not require the user to enter their full name and billing address to make a payment. The user is only required to enter their credit card number, expiration date, and CVC code (if they are in the U.S., they will also have to enter their zip code). That is significantly less input than is normally required for credit card payments, which is a big plus on small mobile key pads. 2) After a user makes an initial credit card payment, their details are stored by Bango, and the next time the user needs to make a payment with the same credit card, they just have to click a single link and it processes the payment on their stored credit card. Needless to say, this is very convenient for mobile users, as it is analogous to direct carrier billing as far as the user is concerned, since they won't need to input any details. The downside with Bango is that their fees are higher than others', all payments must be processed via their site and branding, there is a high minimum ($1.99) and a low maximum ($30) on how much you can charge users, and you need to pay a monthly fee on top of the high transaction costs. It is due to the downsides mentioned above that I am looking for an alternative solution that also offers advantages 1) and 2) above. Is there anything like that? I looked at JunglePay and they do neither 1) nor 2).

    Read the article

  • I need an approach to the problem of preventing the insertion of duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A web service that I call returns a list of data. The data from the web service is updated periodically, so a call to the web service made in one hour could return the same data as a call made an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the web service search, and duplicate data is very likely to be returned (especially for historical data). However, I don't want to insert this duplicate data into the database. I've created a db table in which the data is stored; the most important columns are:

    - Id int autoincrement PK
    - Date date not null -- the date to which the data set belongs
    - LastUpdate date not null -- the date the data set was last updated
    - UserName varchar(50) -- the name of the user doing the search

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, that would be a plus. Another way of interpreting what I'm asking for is as follows: I have a million unique records in my table. A user does a new search. The search results from the user contain around 300k rows that are already in the db. An efficient solution for finding and inserting only the unique records is needed.
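
    For reference, one set-based way to express "insert only the rows I don't already have" on the SQL Server side is an anti-join from a staging table, sketched below. This is only an illustration: the names beyond the columns listed above (dbo.SearchResultsStaging, dbo.SearchResults, ExternalId) are assumptions, "duplicate" is assumed to mean "same ExternalId and Date", and with Entity Framework the same idea would have to be mapped or replaced with a client-side existence check.

        -- Sketch: keep only rows not already present in the target table.
        INSERT INTO dbo.SearchResults ([Date], LastUpdate, UserName, ExternalId)
        SELECT s.[Date], s.LastUpdate, s.UserName, s.ExternalId
        FROM   dbo.SearchResultsStaging AS s
        WHERE  NOT EXISTS (SELECT 1
                           FROM   dbo.SearchResults AS r
                           WHERE  r.ExternalId = s.ExternalId
                             AND  r.[Date]     = s.[Date]);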

    Read the article

  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes Datomic as a system which implicitly deals with the timestamps associated with data storage. From my experience, data is often imperfectly stored in systems, and on many occasions needs to be corrected retroactively (i.e., the question "was a true on Tuesday at 12:00pm?" will often have an incorrect answer stored in the database). This seems like a spot where the abstractions behind Datomic might break down - do they? If they don't, how does the system handle such corrections?

    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard to produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of the time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?

    Read the article

  • Git repo: Unravelling my mess into tidy branches

    - by Martin
    I wanted to play with a project, so I git cloned it and, following its instructions, created a local branch for my configuration (I guess so that users can merge updates back). At first I was just tweaking to suit my preferences, so I didn't bother with any further branching, but now I have some code that might be useful to someone else, but with my passwords, etc. in the same branch. Effectively, I have one big branch from which I'd like to have:

    - Postgres backend (default) but with some new code I've added
    - MySQL backend (the biggest change I've made) with that same new code
    - My settings: I can't git ignore the settings file because I occasionally have to add sections for new functionality, but I need to keep my personal settings out of the public branches! I guess this would work best as a local-only branch.
    - Dev branches, which I would branch from the MySQL one.

    Starting from scratch, I think I could figure out how to branch/merge the various updates, but is there an easy way to walk through the existing repo and choose which commits to apply to which branch? Or possibly create a branch from a point upstream then merge back, excluding certain commits?

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

        // Bind VAO
        glBindVertexArray(m_vao);
        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);
        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);
        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

        m_vertices.data()                       // the entire array is stored
        numVertices * sizeof(VertexPosition)    // the amount of data to read from the entire array

    Is this not the correct way to approach this? I do not wish to use std::vector if possible.

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data among our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted among the other servers connected to the load balancer, giving the affected server time to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS.

    According to Disasterrecovery.org, disaster recovery planning is how companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Dreaded SQLs

    - by lavanyadeepak
    We used to think that a SQL statement without a WHERE clause is the only dangerous one, right? Since running that on the server is just going to impact the entire table, like waving a magic wand. For that reason we should cultivate the habit of writing the statement as a SELECT first, and only then modifying it into the UPDATE or DELETE. Within the T-SQL window, I would normally prefer the following first:

        select * from employee where empid in (4,5)

    and then, once I am satisfied with the results, I would go ahead with the following change:

        --select *
        delete from employee where empid in (4,5)

    Today I discovered another coding horror. This one typically shows up in stored procedures, in the variable nomenclature. It is always desirable to have a naming convention for parameters that is distinct from the column names and internal variables. This helps with quicker debugging of the stored procedures, besides enhancing readability. Otherwise, in a quick bout of enthusiasm, a statement like if (@CustomerID = @CustomerID) - where the latter was intended to denote the column name but has a superfluous @ prepended - makes zeroing in on the problem a little tricky. Had there been a stronger nomenclature rule, debugging would have been more straightforward and simpler, right?
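
    To illustrate the nomenclature point, here is a minimal sketch of the convention being argued for; the procedure, table and prefix are illustrative, not from the original post.

        -- Sketch only: parameters get their own prefix so they can never be
        -- mistaken for column names. All object names here are illustrative.
        CREATE PROCEDURE dbo.GetCustomerOrders
            @p_CustomerID INT                     -- parameter: @p_ prefix
        AS
        BEGIN
            SELECT o.OrderID, o.OrderDate
            FROM   dbo.Orders AS o
            WHERE  o.CustomerID = @p_CustomerID;  -- column vs. parameter is unambiguous
        END

    With a convention like this, an accidental @ in front of a column name no longer produces a condition that silently compares a parameter with itself.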

    Read the article

  • Precise: Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine, but it is very slow to react in certain situations. There is apparently some problem with syncing and IMAP.

    Helper questions: Could the change away from Bonobo have something to do with the slow-down? There might be some trouble with the new engine and "asynchronous actions". What can I do about it? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios:
    - When sending a mail, the composer window hangs there, inactive, for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window closed pretty fast, and one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one.
    - Switching between modules. Coming from mail and switching to the address book takes a couple of seconds. Same for switching to the calendar.

    I read about different "possible causes" and tried a few things:
    - I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book. No noticeable difference.
    - I use 3 Google Calendars. Switching to offline mode made a minor difference, but so minor that it could also be "imagination", since one might have expected this in this case.
    - According to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here)

    Read the article

  • Migrating a database from 32bit to 64bit

    - by Mike Dietrich
    Database migrations from a 32bit environment to a 64bit environment keeping the same platform architecture (e.g. moving an Oracle 10.2.0.5 database from MS Windows XP 32bit to MS Windows Server 2003 64bit) do not happen that often anymore. But we still see them getting done, and there are a few things to note when doing such a move. First of all, the important question is: will you upgrade your database as part of this move, yes or no?

    If you say "Yes" then you are almost done with this topic, as we will take care of that bitness move during the upgrade. The only thing you have to take care of is OLAP, in case you are using the OLAP Option with Analytic Workspaces (AWs) yourself. Those store data in binary LOBs, and in order to move AWs from 32bit to 64bit you have to export your AWs prior to the move and import them later on. People who don't use OLAP don't have to take care of this.

    But if you say "No" (meaning: no upgrade actions involved, you keep your database version) then you have to make sure to invalidate all packages and stored code in the database before you shut down your database in the 32bit environment and prior to moving it over. And the same rule as above applies for OLAP once you use the OLAP Option.

    In the source environment:

        startup upgrade;     -- [or startup migrate; for Oracle 9i]
        @?/rdbms/admin/utlirp.sql
        shutdown immediate

    In the destination environment:

        startup upgrade
        @?/olap/admin/xumuts.plb     -- only if the OLAP Option is installed
        @?/rdbms/admin/utlrp.sql

    The script utlirp.sql will invalidate all packages and stored code, utlrp.sql will recompile, and xumuts.plb will rebuild the OLAP Analytic Workspaces in case you have the OLAP Option installed.

    Read the article

  • How do I start working as a programmer - what do I need?

    - by giorgo
    Hello, I am currently learning Java and PHP, as I have some projects from university which require me to apply both languages: specifically, a Java GUI application connecting to a MySQL database, and a web application that will be implemented in PHP/MySQL. I have started learning the MVC pattern, Struts and Spring, and I am also learning PHP with Zend. My first question is: how can I find employment as a programmer/software engineer? The reason I ask is that I have sent my CV to many companies, but all of them stated that I required work experience. I really need some guidance on how to improve my career opportunities. At present, I work on my own and haven't collaborated with anyone on a particular project. I'm assuming most people create projects and submit them along with their CVs. My second question is: everyone has to start from somewhere, but what if this somewhere doesn't come? What do I need to do to create the circumstances where I can easily progress forward? Thanks

    Read the article

  • O'Reilly certification in PHP worth it?

    - by editzombie
    I asked this question over on Stack Overflow, but I didn't realise it wasn't really the place for not-so-technical questions. I've seen quite a few related threads on this forum, so I thought I'd try and get some feedback here. This is my first time asking a question on this forum, though I've read it a lot; I apologise if this repeats a thread. I'm interested in getting into web development. I am a video editor by trade, but living in Spain, the way things are at the moment, it's very difficult to find work. I have some very basic knowledge of HTML and CSS and a little bit of Flash, and have designed a few little personal websites myself. I also worked for an online marketing production company, where I worked a little on blog design in Blogger, among other social media. So that's my background, but I'm trying to expand my skills and get into web development as a career, or at least as part of my skill base; I was thinking particularly about PHP/MySQL. I have worked a little on some of the Lynda.com tutorials and have invested in a book (Sams Teach Yourself PHP, MySQL and Apache). I'm still finding it very difficult to progress. I know I should really try some practice projects (any recommendations would be welcome), but I was also thinking about doing one of the O'Reilly certification courses and was wondering whether it would be worthwhile for a noob like me. I hear that the courses are associated with an American university, which I guess gives them more clout. Any other thoughts you have about how to make progress in learning web development would be fantastic. Thanks in advance.

    Read the article

  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication, but I don't know which publication it's in. We know the table name: 'MyTable'. We have replication running on our server and it's replicating our database, or part of it: 'MyDatabase'. We need to know if the table is replicated and, if so, which publication is going to need to be reviewed if we make changes to the table. How?

        USE MyDatabase
        GO
        /* Lots of info about our table but not much that's relevant to our current requirements */
        SELECT * FROM sysobjects WHERE NAME = 'MyTable'

        -- mmmm, getting there
        /* To quote BOL - "Contains one row for each merge article defined in the local database.
           This table is stored in the publication database." The interesting column is [pubid] */
        SELECT * FROM dbo.sysmergearticles AS s WHERE NAME = 'MyTable'

        -- really close now
        /* The sysmergepublications table - "Contains one row for each merge publication defined in
           the database. This table is stored in the publication and subscription databases." -
           so this is where we get the publication details */
        SELECT * FROM dbo.sysmergepublications AS s
        WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'

        -- DONE IT.
        /* Combine the two tables above and we get the information we need */
        SELECT s.[name] AS [Publication name]
        FROM dbo.sysmergepublications AS s
        INNER JOIN dbo.sysmergearticles AS s2 ON s.pubid = s2.pubid
        WHERE s2.NAME = 'MyTable'

    So I now know which publication needs to be reviewed before the table is changed.

    Read the article

  • Should business services cross bounded contexts?

    - by Paul T Davies
    Firstly, I am following the convention that a bounded context is synonymous with a department, or possibly that one department has one to many bounded contexts. We have a client consultancy department that has a Documentation Service. Documents are stored in the Document Store Service (which is where all documents in the company are stored - it is a utility service), and the Documentation Service stores information about each document (a business service). As it was designed for client consultancy, it holds information relevant to them. Now health and safety need somewhere to store information about a document. This is different information from client consultancy's, but I have been instructed to extend the existing service to accommodate this extra information. I feel this service is now crossing a bounded context. My worry is that all departments will eventually store their information in here and the service will become bloated, trying to be all things to all departments. Each document record will only store a subset of the information, because it will only belong to one department. It will get worse when different departments want to store the same information but refer to it in different ways, or when two departments want to store different information that they refer to in the same way. In my understanding, this is exactly the reason for bounded contexts. I feel each department should have its own business service for information about a document, but use the same utility service to actually store the document. What would be the correct approach?

    Read the article

  • Methods for getting static data from obj-c to Parse (database)

    - by Phil
    I'm starting out by thinking about how best to code my latest game on iOS, and I want to use parse.com to store pretty much everything, so I can easily change things. What I'm not sure about is how to get static data into Parse, or at least the best method. I've read about using NSMutableDictionary, plists, JSON, XML files, etc. Let's say, for example, that in AS3 I could create a simple data object like so: static var playerData:Object = {position:{startX:200, startY:200}}; Stick it in a static class and, bingo, I've got my static data to use however I see fit. Basically I'm looking for the best method to do the same thing in Obj-C, but with the data not stored directly in the iOS game; instead it should be sent to parse.com to be stored in their database there. The game (at least the distribution version) will only load data from the Parse database, so however I get the static data into Parse, I want to be able to remove it from being included in the eventual iOS file. So any ideas on the best methods for that? If I had longer on this project, it might be nice to use storyboards and create a simple game editor to send data to Parse... actually maybe that's a good idea; it's just that I'm new to Obj-C and I'm looking for the most straightforward (read: quickest) way to achieve things. Thanks for any advice.

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). To think with a de-normalized approach, I don't put identifiers for these related items in my main orders collection. Instead, I repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB locks at the document level on updates, I would be locking the entire order at the moment of the update).

    - Will I have to replicate all the products' related data (i.e. category, maker and optional attributes like color, size…)?
    - What if a new feature is requested and I have to write a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?) Or should I build another collection with these queries in mind and replicate all the products' information (and their orders) again?
    - Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant of requirement changes? (What about emulating JOINs?)
    - Would the optimal approach be a mixed solution with an RDBMS like MySQL in order to retrieve the complete data? I mean: store product, user and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database. It uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables that is defined by relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: customer info, order info, order item info, customer payment data, payment results, and current order status.

    Example: pseudo SQL operations needed for processing an online e-tailer sale:
    - Insert Customer into Customers
    - Insert New Order into Orders
    - Insert Each New Order Item into OrderItems
    - Insert Customer Payment Info into PaymentInfo
    - Insert Payment Processing Result into PaymentDetails
    - Update Customer for Current Order Status

    Common Relational Database Management Systems:
    - Microsoft SQL Server
    - Microsoft Access
    - Oracle
    - MySQL
    - DB2

    It is important to note that no current RDBMS has fully implemented all of the relational principles.

    Common RDBMS traits:
    - Volatile data
    - Supports transaction processing
    - Optimized for updates and simple queries
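
    As an illustration of the pseudo operations above, the sketch below wraps them in a single transaction so the whole sale either commits or rolls back together. The table and column names are illustrative (SQL Server syntax), not part of the article.

        -- Illustrative only: the e-tailer sale as one atomic transaction.
        BEGIN TRANSACTION;

        INSERT INTO Customers (Name, Email) VALUES ('Jane Doe', 'jane@example.com');
        DECLARE @CustomerId INT = SCOPE_IDENTITY();

        INSERT INTO Orders (CustomerId, OrderDate, Status) VALUES (@CustomerId, GETDATE(), 'Pending');
        DECLARE @OrderId INT = SCOPE_IDENTITY();

        INSERT INTO OrderItems (OrderId, ProductId, Quantity) VALUES (@OrderId, 42, 1);
        INSERT INTO PaymentInfo (CustomerId, CardToken) VALUES (@CustomerId, 'tok_example');
        INSERT INTO PaymentDetails (OrderId, Result) VALUES (@OrderId, 'Approved');
        UPDATE Orders SET Status = 'Paid' WHERE OrderId = @OrderId;

        COMMIT TRANSACTION;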

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a game engine: I am planning a computer game and its engine. There will be a three-dimensional world with a first-person view, and it will be single player for now. The programming language is C++ and it uses OpenGL.

    Data-centered design decision: my design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, and so on. Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to use a relational database: should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content like items, characters, inventories, etc., except meshes and textures, which are even more performance related, so I will keep them in memory. Is a SQL database fast enough to use for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query the data needed for rendering objects near the player, and I wouldn't have to take care of data for objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there already is a place where the whole game state is stored.
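
    To make the "query objects near the player" idea concrete, here is a small sketch in SQLite-style SQL. Everything in it (the table name, columns, and the 100-unit box) is an assumption for illustration, not part of the original question.

        -- Illustrative only: fetch entities inside an axis-aligned box around the player.
        CREATE TABLE entities (
            id    INTEGER PRIMARY KEY,
            type  TEXT NOT NULL,
            pos_x REAL NOT NULL,
            pos_y REAL NOT NULL,
            pos_z REAL NOT NULL
        );
        CREATE INDEX idx_entities_pos ON entities (pos_x, pos_y, pos_z);

        -- :px, :py, :pz are the player's coordinates bound at query time.
        SELECT id, type, pos_x, pos_y, pos_z
        FROM   entities
        WHERE  pos_x BETWEEN :px - 100 AND :px + 100
          AND  pos_y BETWEEN :py - 100 AND :py + 100
          AND  pos_z BETWEEN :pz - 100 AND :pz + 100;

    Whether a query like this is fast enough to run every frame is exactly the performance question raised above; the usual alternative is to keep live positions in memory and only persist snapshots to the database.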

    Read the article
