Search Results

Search found 34093 results on 1364 pages for 'database architecture'.


  • Class architecture, no friends allowed

    - by Captain Comic
    The question of why there are no friends in C# has been extensively discussed. I have the following design problem. I have a class that has only one public function, AddOrder(OrderRequest ordreq); clients are allowed to call only this function, and all other logic must be hidden. The Order class listens to market events and must call another function of TradingSystem, ExecuteOrder, so I have to make that public as well. But doing so allows clients of TradingSystem to call it, and I don't want that.

        class TradingSystem
        {
            // Trading system stores a list of orders
            List<Order> _orders = new List<Order>();

            // made public so that Order can call it -
            // but this also exposes it to external clients
            public void ExecuteOrder(Order ord) { }

            // made public for external clients
            public void AddOrder(OrderRequest ordreq)
            {
                // create the order and pass it this trading system
                var order = new Order();
                _orders.Add(order);
                order.OnOrderAdded(this);
            }
        }

        class Order
        {
            TradingSystem _ts;

            public void OnOrderAdded(TradingSystem ts) { _ts = ts; }

            void OnMarketEvent() { _ts.ExecuteOrder(this); }
        }

    Read the article

  • Ideal way/architecture to deliver large data over Web Services

    - by zengr
    We are trying to design six web services that will serve another client component, which requires data from the web services we are implementing. The problem is that the client component hits a single front web service, which then calls a series of five more web services; each gathers data from its own data store and returns it to the original web service, which finally delivers the combined data back to the client component. So if the requested data becomes huge, it will be a serious problem for our internal communication channel. What do you suggest? What can be done to avoid overloading the communication channel between the internal web services while still delivering the data to the client component?
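
    One mitigation worth considering (my suggestion, not from the original question) is to page or chunk the payload at the data layer so that no single hop ever carries the full result set. A keyset-pagination sketch in T-SQL, using a hypothetical Orders table and a @LastSeenOrderId parameter passed between calls:

        -- Each internal web service returns one bounded chunk per call,
        -- keyed on the last row the caller has already received.
        -- @LastSeenOrderId is a parameter supplied by the calling service.
        SELECT TOP (500) OrderId, CustomerId, Amount
        FROM dbo.Orders
        WHERE OrderId > @LastSeenOrderId
        ORDER BY OrderId;

    The front web service can then stream chunks to the client component as they arrive instead of buffering the entire payload in one response.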

    Read the article

  • Does AOP violate layered architecture for enterprise apps?

    - by redzedi
    The question (as stated in the title) comes to me because I was recently looking at Spring MVC 3.1 with annotation support and also considering DDD for an upcoming project. In the new Spring, any POJO with business methods can be annotated to act as a controller; all the concerns I would have addressed within a Controller class can be expressed exclusively through annotations. So technically I can take any class and wire it to act as a controller. The Java code is then free of any controller-specific code, and could instead deal with things like checking security, starting transactions, etc. Will such a class belong to the Presentation or the Application layer? Taking that argument even further, we can pull out things like security and transaction management and express them through annotations, so the Java code is now that of the domain object. Does that mean we have fused the two layers together? Please clarify.

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration with the Database Vault product. Database Vault is part of Oracle's security portfolio of products and allows database user permissions to be locked down so that only appropriate users get appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of SYSDBA users, to all schemas on the database. This can be perceived as an issue. Database Vault adds a layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution is provided that grants base DML access to product data to product users only. The solution ships with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. It contains a Database Vault Realm, Rule Sets, Rules and Command Rules that can be used as-is or extended to meet site-specific needs. The solution is consistent with the Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel, and customers familiar with those solutions will recognize the similarities. For more details, refer to Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.
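
    For a flavor of what such a solution contains (a hedged sketch, not the shipped scripts; the schema name is hypothetical), a realm protecting the product schema is created through the DBMS_MACADM API:

        BEGIN
          -- Create a realm around the product data
          DVSYS.DBMS_MACADM.CREATE_REALM(
            realm_name    => 'Product Data Realm',
            description   => 'Restricts DML on product schemas to product users',
            enabled       => DVSYS.DBMS_MACUTL.G_YES,
            audit_options => DVSYS.DBMS_MACUTL.G_REALM_AUDIT_FAIL);

          -- Protect all objects owned by the (hypothetical) product schema
          DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(
            realm_name   => 'Product Data Realm',
            object_owner => 'PRODUCT_SCHEMA',
            object_name  => '%',
            object_type  => '%');
        END;
        /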

    Read the article

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application that uses a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle); we support these three kinds of databases. We want to give users the possibility to do what I think we can call versioning. Let me explain: we have database 1, the master. We want to be able to create a database 2 that is the same as database 1 but can be given to someone else. Each side then works on its own copy, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? Because sometimes we need to give a copy of the database to another company on another site, and they can't connect to our server. They work on their side and then we want to merge. Does anyone here have experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work, with massive modifications to the database or to the existing queries. This is a two-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas you may give us! J-F
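
    If the merge ends up being done in SQL, one table-at-a-time pattern (a sketch under assumed names; the Orders table and last_modified column are hypothetical, and the application would need to maintain such a column) is a MERGE from the copy into the master:

        -- Reconcile one table from the branch copy into the master (T-SQL, SQL Server 2008+).
        MERGE INTO master_db.dbo.Orders AS m
        USING branch_db.dbo.Orders AS b
            ON m.OrderId = b.OrderId
        WHEN MATCHED AND b.last_modified > m.last_modified THEN
            UPDATE SET Status        = b.Status,
                       last_modified = b.last_modified
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (OrderId, Status, last_modified)
            VALUES (b.OrderId, b.Status, b.last_modified);

    Staging the branch rows in a review table first, instead of merging directly, is one way to support dismissing individual changes before they reach the master.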

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions? The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors to one or a few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result. But my mind fails to generate an image clear enough for me to paint it into code. Threading instead of forking might have given me an easier start, but it would now require massive changes to a code base that I'm not prepared to make on a live system. The more I think of it, the more the basic idea looks to me like a pre-forked web server, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo)code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
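
    One database-side option worth a look (my suggestion, not from the original post) is Oracle 11g's Database Resident Connection Pooling, which exists precisely for large herds of short-lived client processes; a minimal sketch:

        -- Run once as SYSDBA: start the server-side connection pool.
        EXEC DBMS_CONNECTION_POOL.START_POOL();

        -- Optionally size the pool for the expected number of workers.
        EXEC DBMS_CONNECTION_POOL.CONFIGURE_POOL(
               pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',
               minsize            => 4,
               maxsize            => 40,
               inactivity_timeout => 300);

    Clients then opt in by connecting with (SERVER=POOLED) in their TNS descriptor, so the forked children share a small set of pooled server processes; whether the DBI call sites can stay untouched depends on how the abstraction module builds its connect strings.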

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment – in the DRAM of the server – and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology. As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized.

    This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory – the more, the better. With 3 TB and 6 TB total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more, if not your entire database.

    [Figure: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)]

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth of the server dictates the rate at which data can be stored to and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 also provides extreme memory bandwidth in an even more compact form factor, with over half a terabyte per second, giving customers scalability and choice depending on the size of the database.

    Beyond the memory subsystem, Oracle's Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash, while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle's new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized. Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.

    To learn more about the Sun Server X4-4 and Sun Server X4-8, you can find more details, including data sheets and white papers, here. Josh Rosen is a Principal Product Manager for Oracle's x86 servers, focusing on Oracle's operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.
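
    The "switch" boils down to an instance parameter plus per-object attributes; a hedged sketch (table names and sizes are illustrative; syntax per Oracle Database 12c):

        -- Reserve a column-store area in the SGA (takes effect after a restart).
        ALTER SYSTEM SET INMEMORY_SIZE = 200G SCOPE = SPFILE;

        -- Mark a hot table for population into the in-memory column store.
        ALTER TABLE sales INMEMORY PRIORITY HIGH;

        -- Leave a rarely scanned table on disk only.
        ALTER TABLE audit_log NO INMEMORY;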

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction: Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases. It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether they are implemented. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.

    Database settings to enforce on each user database:
    - Auto Close and Auto Shrink properties set to False
    - Auto Create Statistics and Auto Update Statistics set to True
    - Compatibility Level set to 100
    - Page Verify set to CHECKSUM
    - Recovery Model set to Full
    - Restrict Access set to MULTI_USER

    Configure a Policy to Verify Database Settings
    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In Object Explorer, click Management > Policy Management and you will see Policies, Conditions and Facets as child nodes.
    3. Right-click Policies and select New Policy… from the drop-down list to open the Create New Policy popup window.
    4. In the Create New Policy window, name the policy "Implementing and Verify Database Settings for Production Databases", then open the drop-down list under Check Condition and click New Condition… to open the Create New Condition window.
    5. In the Create New Condition window, name the condition "Verify and Change Database Settings" and choose the Database Options facet from the Facet drop-down list. Under Expression, select the Field value @AutoClose, choose the Operator '=' and set the Value to False. Once you have successfully added the first field, add the rest of the fields in the same way. When all the Database Options fields are in place, click OK to save the changes and return to the parent Create New Policy window, where the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
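
    Before (or alongside) building the policy, the same settings can be spot-checked with a plain catalog query; a sketch against sys.databases (SQL Server 2008):

        SELECT name,
               compatibility_level,          -- expect 100
               is_auto_close_on,             -- expect 0 (False)
               is_auto_shrink_on,            -- expect 0 (False)
               is_auto_create_stats_on,      -- expect 1 (True)
               is_auto_update_stats_on,      -- expect 1 (True)
               page_verify_option_desc,      -- expect CHECKSUM
               recovery_model_desc,          -- expect FULL
               user_access_desc              -- expect MULTI_USER
        FROM sys.databases
        WHERE database_id > 4;               -- skip the system databases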

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often the cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications, when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding were no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta, and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintaining a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don't exist in memory. The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are making general administration of the database simpler as well. Their distributed database appears to you, the user, as a single SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole.
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Use BGInfo to Build a Database of System Information of Your Network Computers

    - by Sysadmin Geek
    One of the more popular tools of the Sysinternals suite among system administrators is BGInfo, which tacks real-time system information onto your desktop wallpaper when you first log in. For obvious reasons, having information such as system memory, available hard drive space and system uptime (among others) right in front of you is very convenient when you are managing several systems. A little-known feature of this handy utility is the ability to have system information automatically saved to a SQL database or some other data file. With a few minutes of setup work you can easily configure BGInfo to record system information for all your network computers in a centralized storage location. You can then use this data to monitor or report on these systems however you see fit.

    BGInfo Setup: If you are familiar with BGInfo, you can skip this section. However, if you have never used this tool, it takes just a few minutes to set up in order to capture the data you are looking for. When you first open BGInfo, a timer will be counting down in the upper right corner. Click the countdown button to keep the interface up so we can edit the settings. Now edit the information you want to capture from the available fields on the right. Since all the output will be redirected to a central location, don't worry about configuring the layout or formatting.

    Configuring the Storage Database: BGInfo supports storing information in several formats: SQL Server database, Access database, Excel and text file. To configure this option, open File > Database.

    Using a Text File: The simplest, and perhaps most practical, option is to store the BGInfo data in a comma-separated text file. This format allows the file to be opened in Excel or imported into a database. To use a text file or any other file-system type (Excel or MS Access), simply provide the UNC path to the respective file. The account running the capture task will need read/write access to both the share and the NTFS file permissions. When using a text file, the only option is to have BGInfo create a new entry each time the capture process runs, which adds a new line to the respective CSV text file.

    Using a SQL Database: If you prefer to have the data dropped straight into a SQL Server database, BGInfo supports this as well. It requires a bit of additional configuration, but overall it is very easy. The first step is to create a database where the information will be stored, along with a user account to fill data into this table (and this table only). For your convenience, this script creates a new database and user account (run it as Administrator on your SQL Server machine):

        @SET Server=%ComputerName%
        @SET Database=BGInfo
        @SET UserName=BGInfo
        @SET Password=password
        SQLCMD -S "%Server%" -E -Q "Create Database [%Database%]"
        SQLCMD -S "%Server%" -E -Q "Create Login [%UserName%] With Password=N'%Password%', DEFAULT_DATABASE=[%Database%], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "Create User [%UserName%] For Login [%UserName%]"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "EXEC sp_addrolemember N'db_owner', N'%UserName%'"

    Note the SQL user account must have db_owner permissions on the database in order for BGInfo to work correctly. This is why you should have a SQL user account specifically for this database. Next, configure BGInfo to connect to this database by clicking on the SQL button and filling out the connection properties according to your database settings.
    Select whether to keep only one entry per computer or a history of each system. The data will then be dropped directly into a table named "BGInfoTable" in the respective database.

    Configure User Desktop Options: While the primary function of BGInfo is to alter the user's desktop by adding system info as part of the wallpaper, for our use here we want to leave the user's wallpaper alone so this process runs without altering any of the user's settings. Click the Desktops button and configure the wallpaper modifications to not alter anything.

    Preparing the Deployment: Now we are all set to deploy the configuration to the individual machines so we can start capturing the system data. If you have not done so already, click the Apply button to create the first entry in your data repository. If all is configured correctly, you should be able to open your data file or database and see the entry for the respective machine. Now click the File > Save As menu option and save the configuration as "BGInfoCapture.bgi".

    Deploying to Client Machines: Deployment to the respective client machines is pretty straightforward. No installation is required; just copy BGInfo.exe and BGInfoCapture.bgi to each machine, placing them in the same directory. Once in place, just run the command:

        BGInfo.exe BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt

    Of course, you probably want to run the capture process on a schedule. This command creates a scheduled task to run the capture process at 8 AM every morning, and assumes you copied the required files to the root of your C drive:

        SCHTASKS /Create /SC DAILY /ST 08:00 /TN "System Info" /TR "C:\BGInfo.exe C:\BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt"

    Adjust as needed. Download BGInfo from Sysinternals.
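
    Once rows are flowing in, reporting is a plain query away. A sketch of pulling the most recent capture per machine - note the column names here are assumptions, since BGInfo derives them from whichever fields you configured:

        -- Hypothetical columns [Host Name] and [Timestamp]; adjust to match
        -- the fields you selected in BGInfo.
        SELECT t.*
        FROM BGInfoTable AS t
        JOIN (SELECT [Host Name] AS host, MAX([Timestamp]) AS latest
              FROM BGInfoTable
              GROUP BY [Host Name]) AS m
          ON m.host = t.[Host Name]
         AND m.latest = t.[Timestamp];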

    Read the article

  • Software Architecture and Design vs Psychology of HCI class

    - by Joey Green
    I have two classes to choose from, and I want to get an opinion from more experienced game devs on which might be better for someone who wants to be an indie game dev. The first is a Software Architecture and Design course and the second is a course titled Psychology of HCI. I have previously taken a Software Design course that focused only on design patterns, and an Introduction to HCI course. Software Architecture and Design description: topics include software architectures, methodologies, model representations, component-based design, patterns, frameworks, CASE-based designs, and case studies. Psychology of HCI description: exploration of psychological factors that interact with computer interface usability; interface design techniques and usability evaluation methods are emphasized. I know I would find both interesting, but my concern is really which one might be easier to pick up on my own. I know HCI is relevant to game dev, but I'm unsure whether the topics in the Software Architecture class would be geared toward big software projects beyond the scope of games. Also, I'm not able to take both because they overlap.

    Read the article

  • Free Oracle Special Edition eBooks - Cloud Architecture & Enterprise Cloud

    - by Thanos
    Cloud computing can improve your business agility, lower operating costs, and speed innovation. The key to making it work is the architecture. Learn how to define your architectural requirements and get started on your path to cloud computing with the free Oracle Special Edition e-book, Cloud Architecture for Dummies. Topics covered in this quick reference guide include:
    - Cloud architecture principles and guidelines
    - Scoping your project and choosing your deployment model
    - Moving toward implementation with vertically integrated engineered systems
    Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale. For more information, visit oracle.com/cloud and our cloud services at cloud.oracle.com. Specifically, Infrastructure as a Service (IaaS) is critical to the success of many enterprises. Want to build a private cloud infrastructure and cut down IT costs? Learn more about Oracle's highly integrated infrastructure software and hardware, which can help you architect and deploy a cloud infrastructure optimized for the needs of your enterprise from day one. Download the free e-book Enterprise Cloud Infrastructure for Dummies to:
    - Realize the benefits of consolidation with the added cloud capabilities
    - Simplify deployments and reduce risks with tested and proven guidelines
    - Achieve up to 50% lower TCO than comparable multi-vendor alternatives
    Choosing the right infrastructure technologies is essential to capitalizing on the benefits of cloud computing. Oracle Optimized Solution for Enterprise Cloud Infrastructure helps identify the right hardware and software stack and provides configuration guidelines for your cloud. With this book, you come to understand enterprise cloud infrastructure and find out how to jumpstart your IaaS cloud plans. You also discover Oracle Optimized Solutions and learn how integration testing and proven best practices maximize your IT investments. In addition, you see how to architect and deploy your IaaS cloud to drive down costs and improve performance, how to understand and select the right private cloud strategy, what the key cloud infrastructure elements are and how to use them to achieve your business goals, and more. For more information, visit oracle.com/oos.

    Read the article

  • IoT? Time for Enterprise Architecture

    - by OTN ArchBeat
    Of course you've been listening to the latest OTN ArchBeat Podcast on the challenges and opportunities in the Internet of Things. If so, you'll also be interested in ZDNet blogger Joe McKendrick's recent post, Will the 'Internet of Things' make CIOs' jobs harder?. In that post McKendrick offers this important bit of advice that will certainly have architects saying "I told you so": "Enterprises need to develop architectural approaches to the management of data. Meaning the development of repeatable processes to source, ingest, transform and store information. For years, IT managers simply bought more hardware and addressed data with one-off integration projects. Now it's time for enterprise architecture." IoT is an important new phase in the evolution of enterprise IT. Challenging? You bet! But meeting any such challenge requires big, broad thinking and planning. In that context enterprise architecture has always been important. But as IoT gains traction and speed, enterprise architecture should be top of mind for all concerned.

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High's presentation "The SOA Component Model" hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM's developerWorks website, SCA is a set of conditions which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object groups and Standard Development groups. The Component/Data Object group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase, because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new development request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved November 27, 2011, from developerWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved November 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • Rapid Repository – Silverlight Development

    - by SeanMcAlinden
    Hi all, One of the questions I was recently asked was whether Rapid Repository would work for normal Silverlight development as well as for Windows Phone 7. I can confirm that the current code in the trunk will definitely work for both Windows Phone 7 and normal Silverlight development. I haven't tested V1.0 for compatibility, but V2.0, which will be released fairly soon, will work absolutely fine. Kind regards, Sean McAlinden.

    Read the article

  • MySQL per-database replication?

    - by LucasBr
    So, my problem is interesting: we want to migrate from one server to another. We set up master-slave replication, but my boss came up with the idea of migrating one database at a time. So he asked me to set up another MySQL instance on the new server, leave the slave almost as-is, and make the new instance the new master incrementally, one database at a time. Is it possible, that is, can I transfer database 'x' from the old master to the new master and just tell the slave to synchronize 'x' from the new master from now on? I've read in this old thread ( Mysql Replication - are per-database threads possible? ) that this was not possible at that time. Can this be done now? Thanks! Lucas Bracher.
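
    One building block for this (a hedged sketch; the exact approach depends on your MySQL version, and the host and binlog coordinates below are placeholders) is a per-database replication filter on the slave, combined with re-pointing the slave at the new master:

        -- In my.cnf on the slave (not SQL): replicate only database x
        --   replicate-do-db = x
        -- Then re-point the slave at the new master:
        STOP SLAVE;
        CHANGE MASTER TO
          MASTER_HOST     = 'new-master.example.com',
          MASTER_LOG_FILE = 'mysql-bin.000001',
          MASTER_LOG_POS  = 4;
        START SLAVE;

    Note the caveat from the linked thread still applies to classic replication: all filters are applied by a single SQL thread, so a slave follows one master at a time; truly independent per-database streams need multi-source replication, which arrived in later MySQL/MariaDB releases.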

    Read the article

  • Sessions I Submitted to the PASS Summit 2010

    - by andyleonard
    Introduction: I'm borrowing an idea and blog post title from Brent Ozar (Blog - @BrentO). I am honored that the PASS Summit 2010 (Seattle, 8-11 Nov 2010) would consider allowing me to present. It's a truly awesome event. If you have an opportunity to attend and read this blog, please find me and introduce yourself. If you've built a cool solution to a business or technical problem; or written a script - or a bunch of scripts - to automate part of your daily / weekly / monthly routine; or have some...(read more)

    Read the article

  • Quickly revert an Oracle Database to a known state

    - by Anthony
    I would like to use Selenium to test a web application, but in order to do that successfully the tests must be run against a database in a known state. The recording and running of the Selenium tests is not within the scope of this question, so I'm only looking for recommendations on how best to revert the database after each test execution. Some details:
    - current database size is 30 GB, but only about 4 GB needs to be reverted
    - database is Oracle 11g Standard Edition running on Windows Server 2003
    - the data in 6 different schemas needs to be reverted
    Ideally the process should be scripted so that it can be re-executed frequently and automatically via a scheduled task.
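
    A common answer here is Flashback Database with a guaranteed restore point, but that feature needs Enterprise Edition; since this is Standard Edition, one scriptable alternative is a Data Pump baseline of the six schemas (a sketch; schema names, credentials and the dp_dir directory object are placeholders you would create first):

        rem Take the baseline once:
        expdp system/pw SCHEMAS=app1,app2,app3,app4,app5,app6 ^
              DIRECTORY=dp_dir DUMPFILE=baseline.dmp LOGFILE=baseline_exp.log

        rem After each test run, re-import over the changed data.
        rem TABLE_EXISTS_ACTION=REPLACE drops and recreates each table.
        impdp system/pw SCHEMAS=app1,app2,app3,app4,app5,app6 ^
              DIRECTORY=dp_dir DUMPFILE=baseline.dmp ^
              TABLE_EXISTS_ACTION=REPLACE LOGFILE=revert_imp.log

    Since only about 4 GB of the 30 GB changes, limiting the export to the affected schemas keeps the import window short enough to run from a scheduled task.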

    Read the article

  • Should I move to an Azure database?

    - by Mike Flynn
    I am currently using a database from an installed instance on my web server in a VM on Azure. I was thinking about moving it to an actual Azure Database so I could load balance the web server. Does the Azure database work the same way as a straight install? Will it cost more or the same since I am just moving to a new server? Basically looking for advantages and disadvantages. I am familiar with SQL Server Management Studio.

    Read the article

  • What do DBAs do?

    - by Jonathan Conway
    Yes, I know they administer databases. I asked this question because I'd like to get further insight into the kind of day-to-day duties a DBA might perform, and the real-world business problems they solve. For example: I optimized a 'products' query so that it ran 25% faster, which made the overall application faster. Is this a typical duty? Or is there more to being a DBA than simply making things faster? In what situations does DBA work involve planning and creativity?
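
    The query-tuning duty mentioned above often boils down to indexing; a hypothetical T-SQL example in the spirit of the 'products' query (table and column names are assumptions):

        -- Cover a frequent lookup of products by category, returning
        -- name and price without an extra key lookup per row.
        CREATE NONCLUSTERED INDEX IX_products_category
            ON dbo.products (category_id)
            INCLUDE (name, price);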

    Read the article

  • Latest DSTv15 Timezone Patches Available for E-Business Suite

    - by Steven Chan
    If your E-Business Suite Release 11i or 12 environment is configured to support Daylight Saving Time (DST) or international time zones, it's important to keep your timezone definition files up-to-date. They were last changed in July 2010 and released as DSTv14. DSTv15 is now available and certified with Oracle E-Business Suite Release 11i and 12.

    Is Your Apps Environment Affected? When a country or region changes DST rules or their time zone definitions, your Oracle E-Business Suite environment will require patching if:
    - Your Oracle E-Business Suite environment is located in the affected country or region, OR
    - Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region

    We last discussed the DSTv14 patches on this blog. The latest "DSTv15" timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv15 includes changes to the following timezones since the DSTv14 release:
    - Africa/Cairo: 2010 2010
    - Egypt: 2010 2010
    - America/Bahia_Banderas: 2010 2010
    - Asia/Amman: 2002
    - Asia/Gaza: 2010 2010
    - Europe/Helsinki: 1981 1982
    - Pacific/Fiji: 2011
    - Pacific/Apia: 2011
    - Hongkong: 1977 1977
    - Asia/Hong_Kong: 1977 1977
    - Europe/Mariehamn: 1981 1982

    Read the article

  • Shrink SQL Server database

    - by hani
    My SQL Server 2008 database file (.mdf) is nearly 24 MB, but the log file has grown to 15 GB. If I want to shrink the database, what are the important points to take into consideration? Will shrinking cause any index fragmentation, and will it affect my database performance?
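
    A hedged T-SQL sketch of the usual log-specific shrink (database and file names are placeholders; shrinking the log file, unlike shrinking data files, does not fragment indexes):

        USE MyDb;
        -- In FULL recovery, back up the log first so the inactive portion can be reused.
        BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
        -- Find the logical name of the log file.
        SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        -- Shrink the log file to roughly 1 GB (target size in MB).
        DBCC SHRINKFILE (MyDb_log, 1024);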

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: my basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient (sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like). If you don't want to read the details, you can skip the rest.

    Long explanation: specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer with tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged the image while it was open in the viewer). However, prior to being viewed in the viewer, image A has no value for tags T1, T2, and T3; instead it has a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which automatically sets all images to "dirty" with respect to the new tag).

    The program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases.

    With respect to the meta-data storage, there seem to be a few design choices. Choice 1: individual meta-data vs. centralized meta-data. Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes at intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient
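
    Since the poster says he has never worked with databases, it may be worth noting that the tri-state tag model maps naturally onto an embedded SQL database (e.g. SQLite, which also gives crash-safety via transactions); a minimal schema sketch, with NULL standing in for the "dirty" state (all names illustrative, exact column types vary by engine):

        CREATE TABLE images (
            image_id  INTEGER PRIMARY KEY,
            file_path VARCHAR(1024) NOT NULL UNIQUE,
            file_hash CHAR(64)              -- for duplicate detection
        );
        CREATE TABLE tags (
            tag_id INTEGER PRIMARY KEY,
            name   VARCHAR(255) NOT NULL UNIQUE
        );
        -- One row per (image, tag) once the image has been viewed;
        -- a missing row (or NULL has_tag) means "dirty"/unknown.
        CREATE TABLE image_tags (
            image_id INTEGER NOT NULL REFERENCES images(image_id),
            tag_id   INTEGER NOT NULL REFERENCES tags(tag_id),
            has_tag  BOOLEAN,               -- NULL = unknown (dirty)
            PRIMARY KEY (image_id, tag_id)
        );
        -- Images dirty with respect to a given tag :t are those with no row or NULL:
        -- SELECT i.image_id FROM images i
        -- LEFT JOIN image_tags it ON it.image_id = i.image_id AND it.tag_id = :t
        -- WHERE it.has_tag IS NULL;

    Introducing a new tag then costs nothing up front: every image is automatically "dirty" for it until a row is written, which addresses the mass-invalidate requirement directly.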

    Read the article

  • Installation of MS SQL Database 2005 on Windows XP Home SP3 fails with no specific error

    - by PiotrK
    I am trying to install the MS SQL 2005 database on Windows XP Home SP3. It fails giving no specific reason ("Database failed to install"), then exits. I already had MS SQL Express Edition 2008 installed when the 2005 setup first failed, but even after I removed the 2008 one and tried to install 2005 again, it still failed with the same error. The error happens when I am trying to install the Dragon Age Toolset or Visual Studio 2008 Pro edition. I do not know what other information may be valuable, as I have never before encountered problems installing an MS SQL database.

    Read the article
