Search Results

Search found 33021 results on 1321 pages for 'database sessions'.

Page 27 of 1321

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub-)processes. The database abstraction module should make the change easy, because the function calls in the worker instances can stay the same.

    My problem is that I cannot think of a suitable way to enhance that module so that it establishes communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors to one or a few database connectors that still allows bidirectional communication (for collecting query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result, but I can't yet picture it clearly enough to turn into code. Threading instead of forking might have given me an easier start, but that would now require massive changes to a code base that I'm not prepared to make on a live system. The more I think about it, the more the basic idea looks like a pre-forked web server to me, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready-made solutions on CPAN maybe?
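
    One way to picture the pre-forked "database server" idea is a dispatcher process that owns the only permanent database session(s) and services workers over queues, with one reply queue per worker so results find their way back. The sketch below is only an illustration of that request/response shape, written with Python's multiprocessing for brevity rather than Perl; the query text, worker count and result handling are all placeholders, and in the real system the connector would simply wrap the existing DBI-based module.

        import multiprocessing as mp

        def connector(requests, replies):
            # One long-lived database session would be opened here (e.g. via the
            # existing DBI-encapsulating module); the "query" below is a placeholder.
            while True:
                item = requests.get()
                if item is None:                    # shutdown signal
                    break
                worker_id, sql = item
                result = "rows for: " + sql         # placeholder for the real query
                replies[worker_id].put(result)

        def worker(worker_id, requests, reply):
            # Workers never touch the database; they only talk to the connector.
            requests.put((worker_id, "SELECT status FROM files WHERE worker = %d" % worker_id))
            print(worker_id, "got", reply.get())    # blocks until the connector answers

        if __name__ == "__main__":
            requests = mp.Queue()
            replies = {i: mp.Queue() for i in range(4)}   # one reply queue per worker
            conn = mp.Process(target=connector, args=(requests, replies))
            conn.start()
            workers = [mp.Process(target=worker, args=(i, requests, replies[i])) for i in range(4)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            requests.put(None)                            # tell the connector to stop
            conn.join()

    A message broker or a pair of pipes per worker could replace the in-process queues without changing the shape of the flow.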

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment – in the DRAM of the server – and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology. As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized.

    This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory. The more, the better. With 3 TB and 6 TB of total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more of your database, if not all of it.

    Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth of the server dictates the rate at which data can be stored in and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 provides extreme memory bandwidth in an even more compact form factor, with over half a terabyte per second, giving customers scalability and choice depending on the size of the database.

    Beyond the memory subsystem, Oracle's Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash, while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle's new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized. Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
To learn more about Sun Server X4-4 and Sun Server X4-8, you can find more details including data sheets and white papers here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software.  He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers. 

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction

    Database Administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases.

    It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are in place. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.

    Database settings to enforce on each user database:
    - Auto Close and Auto Shrink properties of the database set to False
    - Auto Create Statistics and Auto Update Statistics set to True
    - Compatibility Level of all user databases set to 100
    - Page Verify set to CHECKSUM
    - Recovery Model of all user databases set to Full
    - Restrict Access set to MULTI_USER

    Configure a Policy to Verify Database Settings

    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In Object Explorer, click Management > Policy Management and you will see Policies, Conditions and Facets as child nodes.
    3. Right-click Policies and select New Policy… from the drop-down list, as shown in the snippet below, to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list, choose the Database Options facet, as shown in the snippet below. Under Expression, select the field @AutoClose, choose the operator '=' and finally choose the value False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields as shown in the snippet below. Once you have added all of the Database Options fields shown above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
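
    As a quick cross-check outside of Policy Based Management, the same options can be read from sys.databases and corrected with plain ALTER DATABASE statements. The sketch below is only an illustration (it assumes a local default instance, Windows authentication, and that every non-system database should carry the values listed above); it prints the corrective statements rather than executing them.

        import pyodbc

        # Illustrative check of the policy's settings directly against sys.databases.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
                              autocommit=True)
        cur = conn.cursor()
        cur.execute("""
            SELECT name, is_auto_close_on, is_auto_shrink_on,
                   is_auto_create_stats_on, is_auto_update_stats_on,
                   compatibility_level, page_verify_option_desc,
                   recovery_model_desc, user_access_desc
            FROM sys.databases WHERE database_id > 4      -- skip the system databases
        """)
        for row in cur.fetchall():
            fixes = []
            if row.is_auto_close_on:                       fixes.append("SET AUTO_CLOSE OFF")
            if row.is_auto_shrink_on:                      fixes.append("SET AUTO_SHRINK OFF")
            if not row.is_auto_create_stats_on:            fixes.append("SET AUTO_CREATE_STATISTICS ON")
            if not row.is_auto_update_stats_on:            fixes.append("SET AUTO_UPDATE_STATISTICS ON")
            if row.compatibility_level != 100:             fixes.append("SET COMPATIBILITY_LEVEL = 100")
            if row.page_verify_option_desc != "CHECKSUM":  fixes.append("SET PAGE_VERIFY CHECKSUM")
            if row.recovery_model_desc != "FULL":          fixes.append("SET RECOVERY FULL")
            if row.user_access_desc != "MULTI_USER":       fixes.append("SET MULTI_USER")
            for fix in fixes:
                print("ALTER DATABASE [%s] %s;" % (row.name, fix))

    A script like this is a complement to, not a replacement for, the policy: PBM gives you scheduled evaluation and compliance reporting that a one-off script does not.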

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often the cause of major headaches when applications need to scale to accommodate more connections, transactions, or both.

    In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications, when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding were no longer required to scale your application? Let me explain.

    For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput.

    Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts: a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don't exist in memory.

    The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires.

    NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are making general administration of the database simpler as well. Their distributed database appears to you, the user, as a single logical SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole.
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • Use BGInfo to Build a Database of System Information of Your Network Computers

    - by Sysadmin Geek
    One of the more popular tools in the Sysinternals suite among system administrators is BGInfo, which tacks real-time system information onto your desktop wallpaper when you first log in. For obvious reasons, having information such as system memory, available hard drive space and system up time (among others) right in front of you is very convenient when you are managing several systems.

    A little-known feature of this handy utility is the ability to have system information automatically saved to a SQL database or some other data file. With a few minutes of setup work you can easily configure BGInfo to record system information for all your network computers in a centralized storage location. You can then use this data to monitor or report on these systems however you see fit.

    BGInfo Setup

    If you are familiar with BGInfo, you can skip this section. However, if you have never used this tool, it takes just a few minutes to set up in order to capture the data you are looking for. When you first open BGInfo, a timer will be counting down in the upper right corner. Click the countdown button to keep the interface up so we can edit the settings. Now edit the information you want to capture from the available fields on the right. Since all the output will be redirected to a central location, don't worry about configuring the layout or formatting.

    Configuring the Storage Database

    BGInfo supports storing information in several database formats: SQL Server database, Access database, Excel and text file. To configure this option, open File > Database.

    Using a Text File

    The simplest, and perhaps most practical, option is to store the BGInfo data in a comma-separated text file. This format allows the file to be opened in Excel or imported into a database. To use a text file or any other file-system type (Excel or MS Access), simply provide the UNC path to the respective file. The account running the task that writes to this file will need read/write access to both the share and the NTFS file permissions. When using a text file, the only option is to have BGInfo create a new entry each time the capture process runs, which adds a new line to the respective CSV text file.

    Using a SQL Database

    If you prefer to have the data dropped straight into a SQL Server database, BGInfo supports this as well. It requires a bit of additional configuration, but overall it is very easy. The first step is to create a database where the information will be stored. Additionally, you will want to create a user account to fill data into this table (and this table only). For your convenience, this script creates a new database and user account (run it as Administrator on your SQL Server machine):

        @SET Server=%ComputerName%.
        @SET Database=BGInfo
        @SET UserName=BGInfo
        @SET Password=password
        SQLCMD -S "%Server%" -E -Q "Create Database [%Database%]"
        SQLCMD -S "%Server%" -E -Q "Create Login [%UserName%] With Password=N'%Password%', DEFAULT_DATABASE=[%Database%], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "Create User [%UserName%] For Login [%UserName%]"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "EXEC sp_addrolemember N'db_owner', N'%UserName%'"

    Note that the SQL user account must have 'db_owner' permissions on the database in order for BGInfo to work correctly. This is why you should have a SQL user account specifically for this database. Next, configure BGInfo to connect to this database by clicking on the SQL button. Fill out the connection properties according to your database settings.
    Select whether to have only one entry per computer or to keep a history for each system. The data will then be dropped directly into a table named "BGInfoTable" in the respective database.

    Configure User Desktop Options

    While the primary function of BGInfo is to alter the user's desktop by adding system info as part of the wallpaper, for our use here we want to leave the user's wallpaper alone so this process runs without altering any of the user's settings. Click the Desktops button and configure the wallpaper modifications to not alter anything.

    Preparing the Deployment

    Now we are all set to deploy the configuration to the individual machines so we can start capturing the system data. If you have not done so already, click the Apply button to create the first entry in your data repository. If everything is configured correctly, you should be able to open your data file or database and see the entry for the respective machine. Now click the File > Save As menu option and save the configuration as "BGInfoCapture.bgi".

    Deploying to Client Machines

    Deployment to the respective client machines is pretty straightforward. No installation is required; you just need to copy BGInfo.exe and BGInfoCapture.bgi to each machine and place them in the same directory. Once in place, just run the command:

        BGInfo.exe BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt

    Of course, you probably want the capture process to run on a schedule. This command creates a scheduled task to run the capture process at 8 AM every morning and assumes you copied the required files to the root of your C drive:

        SCHTASKS /Create /SC DAILY /ST 08:00 /TN "System Info" /TR "C:\BGInfo.exe C:\BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt"

    Adjust as needed, but the end result is that the scheduled task command should look something like this.

    Download BGInfo from Sysinternals.

    Read the article

  • Here's your chance: MOS Feedback Sessions @OOW

    - by cwarticki
    Bring your questions, comments, concerns, opinions, recommendations, enhancement requests and any emotional outbursts! As I travel the world and speak to thousands of customers, I receive plenty of feedback about My Oracle Support. Come hear directly from the source: meet Dennis Reno, VP of Customer Portal Experience.

    The Customer Portal Experience team will host a My Oracle Support Tips and Techniques session and three roundtable feedback sessions at this year's Oracle OpenWorld. The sessions will include a Hardware Support component, as well as best practices that are sure to benefit all My Oracle Support users. The events planned will give our users the opportunity to learn more about how the My Oracle Support customer portal adds value to the support process and to their business needs. The roundtable feedback sessions will allow customers to meet, give feedback, and share their experiences directly with the team responsible for the customer portal experience.

    Schedule (all times PT):
    - Mon, Oct 1, 1:45 PM | My Oracle Support: Tips and Techniques for Getting the Best Hardware Support Possible (Session #CON9745)
    - Tue, Oct 2, 11:00 AM | Roundtable: My Oracle Support General Feedback
    - Wed, Oct 3, 11:00 AM | Roundtable: My Oracle Support Community Feedback
    - Thu, Oct 4, 11:00 AM | Roundtable: My Oracle Support General Feedback

    Customers can find more information, including specific details about how to attend, by accessing My Oracle Support at OpenWorld (Article ID 1484508.1). Enjoy OpenWorld everyone! -Chris Warticki, Global Customer Management

    Read the article

  • Oracle OpenWorld Key Financials Sessions

    - by Theresa Hickman
    Oracle OpenWorld is just around the corner, Sept. 19-23, 2010 at Moscone Center in San Francisco, California. There will be about 70 financials sessions across all the financials product lines: e-Business Suite, JD Edwards, PeopleSoft, and Fusion. I wanted to highlight some of the key financials sessions:

    - Oracle E-Business Financials: Vision, Release Overview, and Product Roadmap: This session provides a comprehensive overview of Oracle's product strategy for Oracle Financials. This cornerstone session for Oracle Financials includes customer successes with Oracle Financials Release 12.1.
    - Value of Upgrading to Release 12.1 for Oracle Financials: This session provides best practices and lessons learned from customers that have already upgraded to Release 12 and 12.1.
    - PeopleSoft Financial Management Solutions High-Value Roadmap into Release 9.2: This session reviews the roadmap candidate ideas for Release 9.2 and discusses PeopleSoft Financials integration with Oracle solutions, such as Hyperion, Governance, Risk, and Compliance (GRC), and business intelligence products.
    - Oracle Fusion Financials Overview: Terrance Wampler, VP of Financials Product Strategy, and Rondy Ng, Group VP of Financial Applications Development, will discuss the key product differentiators to help customers understand the value that Oracle Fusion Financials can bring to their organizations.
    - Answers to the Top 10 Questions About Oracle Fusion Financials: This session talks about how Oracle Fusion Financials can coexist with customers' existing investments in e-Business Suite, PeopleSoft, and JD Edwards. It will also highlight the advantages of the Oracle Fusion technology stack, migration of existing applications to Oracle Fusion, and the role of codevelopment partners, such as Infosys. The panel will also take questions from attendees in order to address other questions customers may have about Oracle Fusion.

    In addition, the following sessions will discuss how customers who are currently using JD Edwards, PeopleSoft, and e-Business Suite can coexist with Fusion Financials without major disruption of existing applications. Customers will learn how they can adopt portions of Oracle Fusion Financials to deliver value-add functionality while maintaining and extending their current deployment of Oracle applications.

    - Understanding Oracle Fusion Financials for JD Edwards Customers
    - Understanding Oracle Fusion Financials for PeopleSoft Customers
    - Understanding Oracle Fusion Financials for Oracle E-Business Suite Customers

    For more information and to register for OpenWorld, see www.oracle.com/openworld.

    Read the article

  • Rapid Repository – Silverlight Development

    - by SeanMcAlinden
    Hi All, One of the questions I was recently asked was whether Rapid Repository would work for normal Silverlight development as well as for Windows Phone 7. I can confirm that the current code in the trunk will definitely work for both Windows Phone 7 and normal Silverlight development. I haven't tested V1.0 for compatibility, but V2.0, which will be released fairly soon, will work absolutely fine. Kind regards, Sean McAlinden.

    Read the article

  • MySQL per-database replication?

    - by LucasBr
    So, my problem is interesting: we want to migrate from one server to another. We set up master-slave replication, but my boss came up with the idea of doing the migration one database at a time. He asked me to set up another MySQL instance on the new server, leave the slave almost as-is, and make the new instance become the new master incrementally, one database at a time. Is this possible? That is, can I transfer database 'x' from the old master to the new master and just tell the slave to synchronize 'x' from the new master from now on? I've read in this old thread ( Mysql Replication - are per-database threads possible? ) that this was not possible at that time. Can it be done now? Thanks! Lucas Bracher.

    Read the article

  • Quickly revert an Oracle Database to a known state

    - by Anthony
    I would like to use Selenium to test a web application, but in order to do that successfully the tests must be run against a database in a known state. The recording and running of the Selenium tests is not within the scope of this website, so I'm only looking for recommendations on how best to revert the database after each test execution. Some details:
    - current database size is 30 GB, however only about 4 GB needs to be reverted
    - database is Oracle 11g Standard Edition running on Windows Server 2003
    - the data in 6 different schemas needs to be reverted

    Ideally the process should be scripted so that it can be re-executed frequently and automatically via a scheduled task.
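
    Because this is Standard Edition (Flashback Database is an Enterprise Edition feature), one commonly scripted approach is to keep a Data Pump export of the six schemas as the known-good baseline and re-import it before each test run. The sketch below is an assumption-heavy illustration: the schema names, credentials and the DP_DIR directory object are placeholders, and revert_to_baseline() would be wired up to Windows Task Scheduler.

        import subprocess

        SCHEMAS = "APP_A,APP_B,APP_C,APP_D,APP_E,APP_F"   # placeholder schema names

        def take_baseline():
            # one-off export of the known-good state (run once, after loading test data)
            subprocess.check_call([
                "expdp", "system/password",
                "SCHEMAS=" + SCHEMAS,
                "DIRECTORY=DP_DIR",              # a pre-created Oracle DIRECTORY object
                "DUMPFILE=baseline.dmp",
                "REUSE_DUMPFILES=YES",
            ])

        def revert_to_baseline():
            # re-import the baseline, replacing whatever the tests changed
            subprocess.check_call([
                "impdp", "system/password",
                "SCHEMAS=" + SCHEMAS,
                "DIRECTORY=DP_DIR",
                "DUMPFILE=baseline.dmp",
                "TABLE_EXISTS_ACTION=REPLACE",
            ])

        if __name__ == "__main__":
            revert_to_baseline()

    Note that TABLE_EXISTS_ACTION=REPLACE only resets tables that exist in the baseline; objects a test creates on its own would need separate cleanup, or the schemas could be dropped and recreated before the import for a stricter reset.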

    Read the article

  • Should I move to an Azure database?

    - by Mike Flynn
    I am currently using a database from an installed instance on my web server in a VM on Azure. I was thinking about moving it to an actual Azure Database so I could load balance the web server. Does the Azure database work the same way as a straight install? Will it cost more or the same since I am just moving to a new server? Basically looking for advantages and disadvantages. I am familiar with SQL Server Management Studio.

    Read the article

  • A Primer on Migrating Oracle Applications to a New Platform

    - by Nick Quarmby
    In Support we field a lot of questions about the migration of Oracle Applications to different platforms. This article describes the techniques available for migrating an Oracle Applications environment to a new platform and discusses some of the common questions that arise during migration. The subject has been discussed frequently in previous blog articles, but there still seems to be a gap regarding the type of questions we are frequently asked in Service Requests. Some of the questions we see are quite abstract: customers simply want to get a grip on how to approach a migration. Others want to know if a particular architecture is viable. Other customers ask about mixing different platforms within a single Oracle Applications environment.

    Just to clarify, throughout this article the term 'platform' refers specifically to operating systems and not to the underlying hardware. For a clear definition of 'platform' in the context of Oracle Applications Support, see Terri's very timely article: Oracle E-Business Suite Platform Smörgåsbord. The migration process is very similar for both 11i and R12, so this article only mentions specific differences where relevant.

    Read the article

  • What do DBAs do?

    - by Jonathan Conway
    Yes, I know they administer databases. I asked this question because I'd like to get further insight into the kind of day-to-day duties a DBA might perform, and the real-world business problems they solve. For example: I optimized a 'products' query so that it ran 25% faster, which made the overall application faster. Is this a typical duty? Or is there more to being a DBA than simply making things faster? In what situations does DBA work involve planning and creativity?

    Read the article

  • Latest DSTv15 Timezone Patches Available for E-Business Suite

    - by Steven Chan
    If your E-Business Suite Release 11i or 12 environment is configured to support Daylight Saving Time (DST) or international time zones, it's important to keep your timezone definition files up-to-date. They were last changed in July 2010 and released as DSTv14. DSTv15 is now available and certified with Oracle E-Business Suite Release 11i and 12.

    Is Your Apps Environment Affected?

    When a country or region changes DST rules or their time zone definitions, your Oracle E-Business Suite environment will require patching if:
    - Your Oracle E-Business Suite environment is located in the affected country or region, OR
    - Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region

    We last discussed the DSTv14 patches on this blog. The latest "DSTv15" timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv15 includes changes to the following timezones since the DSTv14 release:
    - Africa/Cairo (2010)
    - Egypt (2010)
    - America/Bahia_Banderas (2010)
    - Asia/Amman (2002)
    - Asia/Gaza (2010)
    - Europe/Helsinki (1981, 1982)
    - Pacific/Fiji (2011)
    - Pacific/Apia (2011)
    - Hongkong (1977)
    - Asia/Hong_Kong (1977)
    - Europe/Mariehamn (1981, 1982)

    Read the article

  • Shrink SQL Server database

    - by hani
    My SQL Server 2008 database file (.mdf) is nearly 24 MB, but the log file has grown to nearly 15 GB. If I want to shrink the database, what are the important points to take into consideration? Will shrinking cause any index fragmentation, and does it affect database performance?
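
    For a transaction log that has grown this far out of proportion, the usual sequence is a log backup (if the database is in the full recovery model) followed by DBCC SHRINKFILE on the log file; shrinking the log does not move index pages, so it does not fragment indexes the way shrinking data files can. A minimal scripted sketch, assuming Windows authentication and placeholder database, logical file and path names:

        import pyodbc

        # Placeholder names throughout; the real logical log file name comes from
        # sys.database_files in the target database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes",
            autocommit=True,   # BACKUP and DBCC cannot run inside a user transaction
        )
        cur = conn.cursor()

        # Back up the log first so the inactive portion can be released.
        cur.execute("BACKUP LOG MyDb TO DISK = N'D:\\Backups\\MyDb_log.trn'")
        while cur.nextset():   # drain informational result sets so the backup completes
            pass

        # Shrink the log file down to roughly 1 GB.
        cur.execute("DBCC SHRINKFILE (MyDb_log, 1024)")
        while cur.nextset():
            pass

    If the log keeps growing back, the recurring cause is usually missing regular log backups (or a recovery model that does not match how the database is actually backed up), not the shrink itself.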

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief Explanation

    My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long Explanation

    Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the viewer). However, prior to being viewed in the viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).

    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).

    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:

    Choice 1: Individual meta-data vs. centralized meta-data
    - Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
    - Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient
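
    If the centralized route wins, it does not have to mean hand-rolled interval writes: an embedded database such as SQLite in WAL mode gives you a single file, a durable commit after every tagging action, and indexed tag queries, which covers the crash-safety and speed requirements at the 10,000-image scale. A minimal sketch of one possible schema, where the absence of an (image, tag) row is what marks an image as "dirty" for that tag; every name here is illustrative:

        import sqlite3

        conn = sqlite3.connect("library.db")
        conn.execute("PRAGMA journal_mode=WAL")   # committed writes survive a crash
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS images (
            id    INTEGER PRIMARY KEY,
            path  TEXT UNIQUE NOT NULL,
            sha1  TEXT                            -- content hash for duplicate detection
        );
        CREATE TABLE IF NOT EXISTS tags (
            id    INTEGER PRIMARY KEY,
            name  TEXT UNIQUE NOT NULL
        );
        -- one row per (image, tag) once the user has decided; no row = 'dirty'
        CREATE TABLE IF NOT EXISTS image_tags (
            image_id INTEGER NOT NULL REFERENCES images(id),
            tag_id   INTEGER NOT NULL REFERENCES tags(id),
            state    INTEGER NOT NULL CHECK (state IN (0, 1)),   -- 1 = has tag, 0 = does not
            PRIMARY KEY (image_id, tag_id)
        );
        """)

        # Images still 'dirty' for tag T1: no image_tags row exists yet.
        dirty = conn.execute("""
            SELECT i.path FROM images i
            WHERE NOT EXISTS (SELECT 1 FROM image_tags it
                              WHERE it.image_id = i.id
                                AND it.tag_id = (SELECT id FROM tags WHERE name = ?))
        """, ("T1",)).fetchall()

    A nice side effect of this layout is that introducing a new tag needs no per-image update at all: until a row is written, every image is implicitly dirty for it.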

    Read the article

  • Installation of MS SQL Database 2005 on Windows XP Home SP3 fails with no specific error

    - by PiotrK
    I am trying to install MS SQL Database 2005 on Windows XP Home SP3. It fails with no specific reason ("Database failed to install"), then exits. I already had MS SQL Database Express Edition 2008 installed when I ran the 2005 setup. But when I removed the 2008 one and tried to install 2005 afterwards, it still failed with the same error. The error happens when I am trying to install the Dragon Age Toolset or Visual Studio 2008 Professional Edition. I do not know what other information may be valuable; I have never before encountered problems installing a Microsoft database.

    Read the article

  • Opening an oracle database crashes the service

    - by tundal45
    I am experiencing a weird issue with Oracle: after a crash, the service started fine and the database mount went fine as well. However, when I issue the 'alter database open;' command, the database does not open; it gives a generic 'cannot connect to the database' error and crashes the service. Oracle Support has not seen this issue before, so it's pretty scary. The fact that there are no logs giving any leads as to what could be causing this is also scary. I was wondering if the good folks over at Server Fault had seen something like this or have some insights on things that I could try. It's Oracle 10g running on Windows Server 2003. Thanks, Ashish

    Read the article

  • problem with creating a table in a phpMyAdmin database

    - by tombull89
    Hello all, I'm running a MySQL database, administered through phpMyAdmin, on my web package on a 1and1-hosted server. I've managed to set up a database in the control panel, have uploaded everything to root/phpmyadmin, and changed the config.ini.php file to point at 1and1's database server (because that's the way they do it). I can go to the web interface and get to the main page, but all it shows is the database name and I can't find how to create any tables. I know it's a long shot, but I'm almost out of ideas. Also, 1and1 have their own phpMyAdmin panel, which is pretty annoying to use, and a 1and1 web database tool which I have barely looked at. Help and suggestions much appreciated.

    Read the article

  • How to implement Comet on the database side?

    - by Morgan Cheng
    I have been searching for an answer to this question for a long time: how do you implement Comet on the database side? To support Comet, we'd better have a web server stack that supports asynchronous operation, so Apache is not an option. There are open-source web servers, such as Tornado, that can do asynchronous HTTP handling. That is at the web server level. At the database level, how do you make the web server know that some event has happened in the database? There should be an asynchronous way to let the web server know that something was updated in the database. Polling is not an option. Is there any example available?
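
    Whether this is possible at the database level depends on the engine: many setups fall back to a small events table that the async server polls cheaply, but some databases can push. PostgreSQL's LISTEN/NOTIFY is one concrete example of the database telling the web tier that something changed; the sketch below assumes psycopg2, a reachable database, and a channel named 'updates', all of which are illustrative.

        import select
        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")
        conn.autocommit = True            # LISTEN must not sit inside an open transaction
        cur = conn.cursor()
        cur.execute("LISTEN updates")

        # Elsewhere, a trigger or any writer runs:  NOTIFY updates, 'row 42 changed'
        while True:
            # Block until the connection's socket is readable, i.e. a notification arrived.
            if select.select([conn], [], [], 60) == ([], [], []):
                continue                  # timeout: loop and wait again
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("database says:", note.payload)   # hand off to the Comet layer here

    On engines without such a channel (MySQL included), the practical pattern is a narrow "events" table plus a lightweight poll from the Tornado side, keeping the long-lived Comet connections in the web tier rather than in the database.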

    Read the article

  • Should all foreign table references use foreign key constraints

    - by TecBrat
    Closely related to: Foreign key restrictions -> yes or no? I asked a question on SO and it led me to ask this here. If I'm faced with a choice between having a circular reference or just not enforcing the constraint, which is the better choice? In my particular case I have customers and addresses. I want an address to have a reference to a customer, and I want each customer to have a default billing address id and a default shipping address id. I might query for all addresses that have a certain customer ID, or I might query for the address with the ID that matches the default shipping or billing address id. I'm not sure yet how the constraints (or lack of them) will affect the system as my application and its data age.
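
    One widely used middle ground is to keep both foreign keys and avoid the circular insert problem by making the two default-address columns nullable (or, on engines that support it, deferrable), inserting in dependency order and closing the loop with an update. A small sketch, with SQLite standing in only for brevity and all table and column names illustrative:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")
        conn.executescript("""
        CREATE TABLE customers (
            id                  INTEGER PRIMARY KEY,
            name                TEXT NOT NULL,
            default_billing_id  INTEGER REFERENCES addresses(id),   -- nullable: set later
            default_shipping_id INTEGER REFERENCES addresses(id)
        );
        CREATE TABLE addresses (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(id),
            line1       TEXT NOT NULL
        );
        """)

        # Insert in dependency order, then close the circle with an UPDATE.
        cur = conn.cursor()
        cur.execute("INSERT INTO customers (name) VALUES ('Acme')")
        customer_id = cur.lastrowid
        cur.execute("INSERT INTO addresses (customer_id, line1) VALUES (?, '1 Main St')",
                    (customer_id,))
        address_id = cur.lastrowid
        cur.execute("UPDATE customers SET default_billing_id = ?, default_shipping_id = ? WHERE id = ?",
                    (address_id, address_id, customer_id))
        conn.commit()

    On Oracle or PostgreSQL the same loop can instead be closed inside a single transaction using DEFERRABLE INITIALLY DEFERRED constraints, which keeps the columns NOT NULL if the data model demands it.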

    Read the article

  • Sign up Today for User Feedback Sessions at Oracle OpenWorld and JavaOne 2012

    - by Lionel Dubreuil
    You're Invited to Sign Up for Oracle Usability Feedback Sessions

    SIGN UP TODAY to get the most from your conference experience by participating in a usability feedback session where your expertise will help Oracle develop outstanding products and solutions. The Oracle User Experience team is conducting a usability evaluation on publishing and accessing Oracle Enterprise Repository content when building SOA projects in JDeveloper. We are asking developers and architects who build or integrate applications using SOA Suite to take a look at the interaction between JDeveloper and the Enterprise Repository. We are looking for feedback on this interaction so that we may improve the user interface in a future release.

    The feedback sessions will be conducted during the Oracle OpenWorld and JavaOne conferences, at the Intercontinental Hotel in San Francisco, CA. Sessions will last 1 hour and will be held on Monday, October 1 through Wednesday, October 3, 2012. This event fills up quickly, and space is limited. If you are interested in participating, please send an email to [email protected] with the following information:

    - Name, company name, job title, email, and phone number (work, mobile, include country code)
    - Which conference are you attending? (Oracle OpenWorld / JavaOne)
    - Have you ever participated in usability activities with Oracle or any of its subsidiaries? (Yes, specify / No)
    - Are you currently using JDeveloper? (Yes, specify version(s) / No)
    - How long have you used JDeveloper? (Less than 1 year / 1-2 years / 3-4 years / 4+ years)
    - Are you currently using SOA features in JDeveloper? (Yes / No)
    - How long have you used SOA features in JDeveloper? (Less than 1 year / 1-2 years / 3-4 years / 4+ years)
    - How often do you use SOA features in JDeveloper? (Daily / 2-3 times a week / Once a week / Once a month or less)
    - Briefly describe the types of SOA tasks you use JDeveloper to perform
    - Your availability: if you know it, please indicate which day you would prefer to participate (Monday, Tuesday or Wednesday). Limited sessions are available on each day, and each session lasts 1 hour.

    Thank you for taking the time to complete this questionnaire. It will help us match you to the best-suited feedback session. Once we receive your email, we will contact you to set up a time and day for participation. You'll find more information about our on-site lab on the VoX (Voice of User Experience) blog, and on our Events page at Usable Apps.

    Read the article

  • Is there a good, free, online database application?

    - by andygrunt
    Google Docs doesn't have a database app (yet), but can anyone point me to a good, free, online substitute? It'll be for simple things like a database of my DVD collection, and I'd want to be able to import/export using standard file formats and add/edit fields of existing databases. By the way, I'm not interested in using a spreadsheet as a database.

    Read the article

  • Relationships in a Chen ERD

    - by Nibroc A Rehpotsirhc
    I am working on a Chen ERD to model our organization's merchandise. Our central entity is a Style. We have supplemental entities of Color and Season. I am defining our assortment as the relationship between these three entities; this relationship itself has attributes and is defined by the three entities which participate in the mandatory relationship. The rules are:
    - Many Styles can be offered in a Season, and a Style can be offered in many Seasons.
    - Within a Season, a Style can be offered in many Colors.

    I then have two other entities, one of which I believe is a weak entity, Climate, and the other may be weak, but I am not sure, this being Transaction Channel. I am thinking of these as relationships off of a relationship? Meaning, for a given Style/Color combination offered in a Season, it can be available through one or more Transaction Channels. Additionally, within a Season, a given Style/Color combination can be intended for one or more Climates. Is it valid to have relationships off of relationships? Or does this requirement dictate that I should think of the Style/Color/Season relationship as an entity itself, and define the relationships to Climate and Transaction Channel off of this entity?
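
    Translated into relational terms, the usual answer to "relationships off of relationships" is to promote the Style/Color/Season relationship to an associative entity with its own (composite) key, and let Climate and Transaction Channel relate to that entity rather than to a bare relationship. The sketch below is illustrative only; every table, column and attribute name is an assumption, and SQLite is used just to keep it short.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE styles  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE colors  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE seasons (id INTEGER PRIMARY KEY, name TEXT);

        -- the former ternary relationship, now an associative entity with attributes
        CREATE TABLE offerings (
            style_id  INTEGER NOT NULL REFERENCES styles(id),
            color_id  INTEGER NOT NULL REFERENCES colors(id),
            season_id INTEGER NOT NULL REFERENCES seasons(id),
            planned_units INTEGER,                  -- example relationship attribute
            PRIMARY KEY (style_id, color_id, season_id)
        );

        -- climates and channels relate to an offering, not to a relationship
        CREATE TABLE offering_climates (
            style_id INTEGER, color_id INTEGER, season_id INTEGER,
            climate  TEXT NOT NULL,
            PRIMARY KEY (style_id, color_id, season_id, climate),
            FOREIGN KEY (style_id, color_id, season_id)
                REFERENCES offerings(style_id, color_id, season_id)
        );
        CREATE TABLE offering_channels (
            style_id INTEGER, color_id INTEGER, season_id INTEGER,
            channel  TEXT NOT NULL,
            PRIMARY KEY (style_id, color_id, season_id, channel),
            FOREIGN KEY (style_id, color_id, season_id)
                REFERENCES offerings(style_id, color_id, season_id)
        );
        """)

    In the Chen diagram this is the same move: the assortment diamond becomes an associative entity, and Climate and Transaction Channel connect to it with ordinary one-to-many relationships.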

    Read the article

  • DHCP server with database backend

    - by Cory J
    I have been looking around for something to replace my (ancient) ISC-DHCPd server. A DHCP server with a database backend sounds like a great idea to me, as I could then have a nice, friendly web interface to my server. Surprisingly, I can't find any major open-source projects that offer this. Does anyone know of one? I have also read about modifying ISC-DHCPd to use a database backend; can anyone tell me if that solution is stable enough for a busy production server? Or is using a database a Bad Idea™ altogether? PS - http://stackoverflow.com/questions/893887/dchp-with-database-backend looks like SO couldn't answer this old, similar question. EDIT: I am looking for something on a free OS platform, Linux or BSD. If there is something absolutely great that is Windows-only, though, I'm still interested.

    Read the article
