Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 120/356

  • RTF template migration in BIP

    - by Manoj Madhusoodanan
    When you are creating a BI Publisher template through the application, the RTF template information will be stored in the XDO_LOBS table. The LOB_CODE column stores the template short code, i.e. the link between the template and the LOB. When you migrate the template through java oracle.apps.xdo.oa.util.XDOLoader, make sure the RTF file name and the template short code are the same; otherwise the RTF will not get attached. E.g.: Source Instance Template Short Code: XXCUST_TEMPLATE, RTF Name: XXCUST_TEMPLATE_1.rtf. If you migrate the above details through XDOLoader, the RTF will not get attached to the template in the destination instance, so make sure the RTF name matches the short code (XXCUST_TEMPLATE.rtf).

    Read the article

  • Where is Nautilus icon file located and how is it chosen?

    - by Steve
    When I plug my Garmin Nuvi 265 GPS device into my computer via a USB cable, it mounts as a drive with a blue triangle icon instead of the default gray hard drive icon. How does Nautilus know to do this? After much laborious searching, I found that the icon info is stored in ~/.gconf/apps/nautilus/desktop-metadata/GARMIN@46@volume/gconf.xml -- but only when a custom icon is selected. So where is this blue icon file? Why does Nautilus use it instead of the plain drive icon? Is there a way to give each of my drives a custom icon -- so that when I stick in my various flash drives, they each show a distinctive icon (e.g. via a 'favicon.ico'-style file in the drive's root)? Using Gnome 2.30.2 on Ubuntu 10.04.

    Read the article

  • Store VOD wmi data in a database directly or use CQRS?

    - by JD01
    I need to collect video-on-demand bandwidth usage every few minutes (or maybe every few seconds) and store it in a database so users can produce graphs of bandwidth usage over a period of time (a few hours, days, weeks or even possibly months). The sort of data that will be stored includes the number of users watching videos, current server bandwidth (Mb/s), multicast bit rate, etc. I am wondering whether CQRS with event sourcing would be a good approach, as I could then rebuild my objects to create different projections (i.e. different graphs/reports etc.), but then again it seems like I am introducing complexity which might not be needed. Or would it be best to just put the data directly into a database (currently PostgreSQL) and query off that? Having thought about it, my table is a form of audit log anyway, so I don't think I need event sourcing at all. Any thoughts?

    Read the article

  • Open source package for securely allowing users to log in and provide information

    - by JTS
    I have a site written mostly in PHP and HTML. I also have an SQL database of personal information like names and addresses. I would like my users to be able to log in to my website with a login I can email or snail-mail to them, and view and edit their information in my database. Users can currently enter information online and I store it in my database, but they can't view or edit stored information. I can add the code to do this, but once I give users the ability to view information I suddenly have a lot more security concerns. Is there an open source package for allowing users to do something like this, or an established convention for it? I know this is a pretty basic question, and there might be some good literature about it that I have yet to find, so if someone can just point me in the direction of some of that information -- or better yet give me some firsthand information about this -- that would be great.

    Read the article

  • SQL Server and the XML Data Type : Data Manipulation

    The introduction of the xml data type, with its own set of methods for processing xml data, made it possible for SQL Server developers to create columns and variables of the type xml. Deanna Dicken examines the modify() method, which provides for data manipulation of the XML data stored in the xml data type via XML DML statements.

    Read the article

  • Grep in a variable

    - by Ashfame
    How do I grep the contents of a variable? I have stored the wget output in a variable and I need to extract some strings from it. For example, the content of the variable is: upgrade http://wordpress.org/download/ http://wordpress.org/wordpress-3.0.5.zip 3.0.5 en_US 4.3 4.1.2 I need to check whether the string contains the word upgrade, so I can do a simple grep and then check its exit status via $? and proceed. But how can I get the value 3.0.5, which is actually the fourth word? And how do I actually grep a variable?

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is this: if you would put each instance of a kind of data on its own web page, and if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using the somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties: public class SettingsRecord { public virtual int Id { get; set;} public virtual string RuleType { get; set; } public virtual string Name { get; set; } public virtual string Criterion { get; set; } public virtual string Theme { get; set; } public virtual int Priority { get; set; } public virtual string Zone { get; set; } public virtual string Position { get; set; } } Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described by a migration: public int Create() { SchemaBuilder.CreateTable("SettingsRecord", table => table .Column<int>("Id", column => column.PrimaryKey().Identity()) .Column<string>("RuleType", column => column.NotNull().WithDefault("")) .Column<string>("Name", column => column.NotNull().WithDefault("")) .Column<string>("Criterion", column => column.NotNull().WithDefault("")) .Column<string>("Theme", column => column.NotNull().WithDefault("")) .Column<int>("Priority", column => column.NotNull().WithDefault(10)) .Column<string>("Zone", column => column.NotNull().WithDefault("")) .Column<string>("Position", column => column.NotNull().WithDefault("")) ); return 1; } When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class: private readonly IRepository<SettingsRecord> _repository; private readonly ISignals _signals; private readonly ICacheManager _cacheManager; public SettingsService( IRepository<SettingsRecord> repository, ISignals signals, ICacheManager cacheManager) { _repository = repository; _signals = signals; _cacheManager = cacheManager; } The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create.
    Here's, for example, how I'm getting all the records in the table: _repository.Table.ToList() And here's how I'm deleting a record: _repository.Delete(_repository.Get(r => r.Id == id)); And here's how I'm creating one: _repository.Create(new SettingsRecord { Name = name, RuleType = ruleType, Criterion = criterion, Theme = theme, Priority = priority, Zone = zone, Position = position }); In summary, you create a record class and a migration, and you're in business: you can just manipulate the data through the repository that the framework exposes. You even get ambient transactions from the work context.
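    Since Table is an IQueryable<SettingsRecord>, ordinary Linq operators (with System.Linq in scope) compose on top of it as well. A small illustrative query in the same style as the fragments above -- the filter itself is hypothetical, not from the original module:

        // Hypothetical example: pick the highest-priority rule configured for the "Content" zone.
        var topRule = _repository.Table
            .Where(r => r.Zone == "Content")
            .OrderByDescending(r => r.Priority)
            .FirstOrDefault();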

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable they really are when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".
    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.
    Name          Database File              Log File
    Master        master.mdf                 mastlog.ldf
    Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
    Model         model.mdf                  modellog.ldf
    MSDB          msdbdata.mdf               msdblog.ldf
    Distribution  distmdl.mdf                distmdl.ldf
    TempDB        tempdb.mdf                 templog.ldf
    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, because of the information it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.
    Resource Database
    Honestly, until recently I never knew this database even existed, until I started to research SQL Server system databases. The reason for this is largely that the Resource database is hidden from users; in fact, its database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all the system objects that can be accessed by all other databases. In short, it contains all the system views and stored procedures regarding system information that appear in every user database. One of the many benefits of storing the system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; maintenance is also reduced, since only one code base has to be maintained for all of the system views and procedures.
    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows default database objects to be predefined for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an "Audit" table, then defining the "Audit" table in the model guarantees that the table will be present in every new database created after the model is altered.
    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail and SQL Server Service Broker, along with SQL Server itself. The SQL Server Agent uses this database to store job configurations and SQL job schedules, along with SQL alerts and operators. In addition, this database also stores all SQL job parameters along with each job's execution history. Finally, this database is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.
    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.
    Tempdb Database
    The Tempdb, as the name implies, is used to store temporary data and data objects; examples include temp tables and temporary stored procedures. It is important to note that all data and data objects in this database are cleared whenever SQL Server restarts. This database is also used by SQL Server when it is performing certain internal operations; typically, SQL Server uses it for large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.
    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.

    Read the article

  • FGLRX Drivers Keep Crashing | "Installation Media" reads Natty even though I'm in Precise

    - by Tom Thorogood
    I recently switched back to Ubuntu after a year or so of hardly touching my Ubuntu partition, and upgraded from Natty. Every time I start up, I get the "A problem has occurred..." popup, but it won't let me report it because Precise is not in beta. The details of the report show a segfault, and going through all the details, I notice that it lists Natty under "InstallationMedia" -- I just installed these drivers, so I'm really unsure why it says this. I wish I could copy this entire error report, but I see no way of doing that (is it stored somewhere in /var/log?). I'm new to the Unity interface (it's why I stopped using Ubuntu to begin with, but now that I'm getting used to it I'm liking it better). Thanks.

    Read the article

  • Animating isometric sprites

    - by Mike
    I'm having trouble coming up with a way to animate these 2D isometric sprites. The sprites are stored like this: <Game Folder Root>/Assets/Sprites/<Sprite Name>/<Sprite Animation>/<Sprite Direction>/<Frame Number>.png So, for example, /Assets/Sprites/Worker/Stand/North-East/01.png Sprite sheets aren't really viable for this type of animation. The example stand animation is 61 frames; 61 frames for all 8 directions alone is huge, and there's more than just a standing animation for each sprite. Creating an sf::Texture for every image and every frame seems like it will take up a lot of memory and make it hard to keep track of that many images. Unloading the image and loading the next one every single frame seems like it will do a lot of unnecessary work. What's the best way to handle this?
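    A common middle ground between preloading every frame and reloading a file on every tick is a lazy cache keyed by the frame's path: a frame is loaded the first time it is requested and then reused. Here is a minimal C# sketch of that idea (the texture type and loader delegate are placeholders; no particular SFML binding is assumed):

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Lazily loads and caches animation frames, keyed by their file path.
        class FrameCache<TTexture>
        {
            private readonly Dictionary<string, TTexture> _cache = new Dictionary<string, TTexture>();
            private readonly Func<string, TTexture> _load;   // e.g. path => new Texture(path)

            public FrameCache(Func<string, TTexture> load) { _load = load; }

            public TTexture Get(string sprite, string animation, string direction, int frame)
            {
                // Mirrors the layout Assets/Sprites/<Sprite>/<Animation>/<Direction>/<NN>.png
                string path = Path.Combine("Assets", "Sprites", sprite, animation, direction,
                                           frame.ToString("00") + ".png");
                TTexture texture;
                if (!_cache.TryGetValue(path, out texture))
                {
                    texture = _load(path);     // loaded once, reused on every later request
                    _cache[path] = texture;
                }
                return texture;
            }
        }

    Animations that have not been used for a while can be evicted from the dictionary if memory becomes a concern, which keeps the working set close to what is actually on screen.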

    Read the article

  • Lubuntu 13.10 selected LX games desktop now can't login

    - by user111667
    I logged out of my normal desktop and selected LX games from the dropdown list and logged back in. This led me to a black screen and now I can't see the login screen in order to change back to my normal desktop. On startup the machine boots normally, the lubuntu splash screen shows up, then it goes to a black screen with nothing on it - not even a mouse cursor. I can bring up a terminal using ctrl+alt+F2. Is there a way via terminal to change my desktop environment back to its previous state? Or alternatively, is there a file I can edit where my preferred desktop is stored? (The machine is dual-booted so I can access the Lubuntu files from LXLE which is installed on a second partition). The machine in question is a Toshiba A200 laptop.

    Read the article

  • Windows Azure CDN(Content Delivery Network)

    - by kaleidoscope
    Windows Azure CDN caches your Windows Azure blobs at strategically placed locations to provide maximum bandwidth for delivering your content to users. You can enable CDN delivery for any storage account via the Windows Azure Developer Portal. The CDN provides edge delivery only to blobs that are in public blob containers, which are available for anonymous access. Windows Azure CDN has 18 locations globally (United States, Europe, Asia, Australia and South America) and continues to expand. The benefit of using a CDN is better performance and user experience for users who are farther from the source of the content stored in the Windows Azure Blob service. In addition, Windows Azure CDN provides worldwide high-bandwidth access to serve content for popular events. Current CDN locations in the US. For more details, please refer to the link: http://blogs.msdn.com/windowsazure/archive/2009/11/05/introducing-the-windows-azure-content-delivery-network.aspx Sarang

    Read the article

  • Best practices for upgrading user data when updating versions of software

    - by Javy
    In my code, I check the current version of the software on launch and compare it to the version stored in the user's data file(s). If the software version is newer, I call different methods to update the old data to the newer data version, where necessary. I usually have to write a new conversion method with each update that changes the user data in some way, and I cannot remove the old ones in case someone missed an update; the app must be able to go through each method call in turn and update the data until it is current. With larger data sets, this could be a problem. In addition, I recently had a brief discussion about this with another StackOverflow user, who indicated he always appended a date stamp to the filename to manage data versions, although his reasoning as to why this was better than storing the version data in the file itself was unclear. Since I've rarely seen management of user data versions covered in the books I've read, I'm curious what the best practices are for naming user data files and for upgrading older data to newer versions.
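    The "apply each missed conversion in turn" approach described above is usually easiest to keep correct when the upgrade steps are registered per version and run in order until the stored version catches up with the current one. A minimal C# sketch of that pattern (the UserData fields and the individual steps are hypothetical placeholders):

        using System;
        using System.Collections.Generic;

        class UserData
        {
            public int Version;        // format version the file was written with
            // ... the actual user fields would live here ...
        }

        static class DataUpgrader
        {
            public const int CurrentVersion = 4;

            // One entry per format change; the key is the version a step upgrades *from*.
            private static readonly Dictionary<int, Action<UserData>> Steps =
                new Dictionary<int, Action<UserData>>
                {
                    { 1, d => { /* convert v1 fields to the v2 layout */ } },
                    { 2, d => { /* convert v2 fields to the v3 layout */ } },
                    { 3, d => { /* convert v3 fields to the v4 layout */ } },
                };

            public static void Upgrade(UserData data)
            {
                // Apply every missed step in order, so a user who skipped several
                // releases still ends up on the current format.
                while (data.Version < CurrentVersion)
                {
                    Steps[data.Version](data);
                    data.Version++;
                }
            }
        }

    Because each step only knows about two adjacent formats, the old steps never need to change when a new one is added.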

    Read the article

  • RadCaptcha for ASP.NET AJAX audio feature available in Q1 2010

    Now that the Q1 2010 release is here, I want to bring your attention to a cool new feature in our RadCaptcha control for ASP.NET AJAX - audio support. Head on over to our online demos to see the feature in action. Enabling this on an existing CAPTCHA is easy - you just need to set the CaptchaImage-EnableCaptchaAudio property to true. Adding this feature to your site will allow blind or partially sighted people to use it as well. The audio support presents some very interesting possibilities for people who like to customize things. For example, you can replace the original audio files that come with the control (stored in the ~/App_Data/RadCaptcha/ folder in your web application) and add some custom ones - instead of hearing simply "alpha", you can make the control ask "enter the third letter in the word boat". You can also make it speak in ...

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There's a growing gap between 20th century marketing and a modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:
    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What's Working
    5. Marketing Technology: Enable and Extend the Cloud
    Product News from Eloqua Experience 2013
    We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; they are summarized below, too:
    Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities.
    Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua.
    Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management.
    Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions.
    Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; they are easily accessible in the Topliners Community.
    Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and engaging Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.
    Biggest and Best Eloqua Experience
    There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series.
    The 2013 Markie Awards
    The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience.
    More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Working with logout dialog box - text error

    - by aaron.kyle
    I am having a problem with the shutdown dialog box in Ubuntu 12.04. If I am logged in as any user and press shutdown, I see the box with the question 'Are you sure...' and its usual options. Shutting down when I am not logged in as a specific user, however, displays only square boxes. An image of this error can be found here: I believe this error started a few weeks ago when I accidentally changed the group of my root system directory, so it might be a permission thing or an improperly assigned group lingering somewhere. The trouble is that I don't know where the text for this box is stored, and I have no idea where to begin checking. Can anyone point me in the right direction?

    Read the article

  • Storing documents in DMS

    - by Shaza
    I need your opinions and suggestions about storing documents in a DMS. I think the DMS should save its own copies of the documents, not just their original paths on disk, so the DMS should have its own space to write to. But how should the copies be stored? Do they get their own extension, different from the original one? What about the algorithm that stores them, and the one that retrieves them? What do you suggest?

    Read the article

  • What's a good structure to save and retrieve locations of images?

    - by Goot
    I have a Java EE application in which I collect information about movies. In my backend I provide data like the name, description, genre and a random UUID. I also have lots of related files, which are stored on a file server, including some screenshots, the DVD or BluRay cover and video trailers. My current approach is: when saving the files to the fileserver, I retrieve the movie's random UUID (which is the primary key, btw) and rename the files screenshot_[UUID]_1, screenshot_[UUID]_2, etc. Now, there are lots of other ways to handle this, like saving all filenames in a database, or creating a directory structure on the fileserver for every UUID and, e.g., returning all images in the "[uuid]/screenshots" folder via REST. I expect about 30k requests a day, so the service has to be pretty performant. What's the best way to solve this?
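    Whichever layout wins out, it helps to keep the naming rule in one place so the upload code and the REST layer cannot drift apart. A small sketch of the current screenshot_[UUID]_n convention, written in C# purely to illustrate the idea (the backend above is Java EE, and the .jpg extension is an assumption):

        using System;
        using System.IO;
        using System.Linq;

        static class ScreenshotFiles
        {
            // Single place that encodes the screenshot_[UUID]_n naming rule.
            public static string BuildName(Guid movieId, int index)
            {
                return string.Format("screenshot_{0}_{1}.jpg", movieId, index);
            }

            // Lists all screenshots for a movie by scanning the storage directory,
            // so no per-file bookkeeping is needed in the database.
            public static string[] ListFor(Guid movieId, string storageRoot)
            {
                string pattern = string.Format("screenshot_{0}_*", movieId);
                return Directory.GetFiles(storageRoot, pattern)
                                .OrderBy(p => p)
                                .ToArray();
            }
        }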

    Read the article

  • Does my approach for building a real time monitoring system make sense? [closed]

    - by sameer
    I am developing an application that will display a dashboard showing data from different SQL databases. This needs to happen in almost real time; our refresh interval is about 5 minutes. My approach so far is:
    - Develop a Windows service to accumulate the data from the various SQL Server instances.
    - Persist those details into a SQL DB, from which the dashboard web page will display them.
    - Trigger the data fetch in the Windows service every x minutes.
    The details of the SQL Server instances will be stored in the SQL DB that the Windows service refers to. Does my approach make sense? (A sketch of the polling loop follows this list.)
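    The polling part of that design is simple to sketch: a timer in the Windows service fires every x minutes, reads each configured instance, and writes the results to the central database. A hedged C# outline (the connection-string list and the two helper methods stand in for the real collection and persistence logic):

        using System;
        using System.Collections.Generic;
        using System.Timers;

        class DashboardCollector
        {
            private readonly Timer _timer;
            private readonly List<string> _instanceConnectionStrings;  // loaded from the central SQL DB

            public DashboardCollector(List<string> instances, TimeSpan interval)
            {
                _instanceConnectionStrings = instances;
                _timer = new Timer(interval.TotalMilliseconds) { AutoReset = true };
                _timer.Elapsed += (sender, args) => CollectOnce();
            }

            public void Start() { _timer.Start(); }
            public void Stop() { _timer.Stop(); }

            private void CollectOnce()
            {
                foreach (string connectionString in _instanceConnectionStrings)
                {
                    // Placeholder: query this instance for whatever the dashboard needs...
                    object snapshot = ReadMetrics(connectionString);
                    // ...and persist it to the central database the web page reads from.
                    SaveSnapshot(snapshot);
                }
            }

            // Hypothetical helpers: the real ones would use SqlConnection/SqlCommand.
            private object ReadMetrics(string connectionString) { return new object(); }
            private void SaveSnapshot(object snapshot) { }
        }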

    Read the article

  • How do I get the point coords of a rotated SFML shaperect?

    - by user15498
    I am trying to get collisions with bullets working, and am using SFML. I am currently using code that computes the positions of the rectangle's points for collisions, but I think there should be a way to do this without computing them myself -- by simply getting the points from SFML, since the shape is a rectangle and the points are already stored that way. Is there a way to do that, perhaps through a combination of getPoint() and getGlobalBounds()? While on this topic, is it better to use rectangle shapes or sprites? I used to only use sprites, but with the addition of textures and more low-level stuff I think it would be best to switch to using rectangles and setting their size.
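    If no helper is available, the corner positions can also be computed by hand from the rectangle's position, origin, size and rotation (in SFML the usual route is running each local getPoint() result through the shape's transform). A small C# sketch of that math, ignoring scale and using plain structs rather than any SFML binding:

        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        static class RotatedRect
        {
            // World-space corners of a rectangle with the given position, origin,
            // size and rotation in degrees (scale assumed to be 1).
            public static Vec2[] Corners(Vec2 position, Vec2 origin, Vec2 size, float rotationDegrees)
            {
                float rad = rotationDegrees * (float)Math.PI / 180f;
                float cos = (float)Math.Cos(rad);
                float sin = (float)Math.Sin(rad);

                Vec2[] local =
                {
                    new Vec2(0, 0), new Vec2(size.X, 0),
                    new Vec2(size.X, size.Y), new Vec2(0, size.Y)
                };

                var world = new Vec2[4];
                for (int i = 0; i < 4; i++)
                {
                    float x = local[i].X - origin.X;   // shift into origin-relative space
                    float y = local[i].Y - origin.Y;
                    world[i] = new Vec2(position.X + x * cos - y * sin,   // rotate, then translate
                                        position.Y + x * sin + y * cos);
                }
                return world;
            }
        }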

    Read the article

  • Is there a media player that works on HTTPS sites?

    - by Iain Hallam
    I'm currently using Yahoo! Media Player for a site that needs to play MP3 files that are stored on our server. In total, there's quite a bit more than the free limits at Soundcloud, but each file is only a few minutes long. YMP is pretty good, but causes security warnings on HTTPS pages, because it can only be served via HTTP. Is there an equivalent free player I can embed for the HTTPS pages? EDIT: Just to clarify, I'm initially looking for something that will scan the page and turn media links playable.

    Read the article

  • Intro to Sessions in ASP.Net

    Sessions are used to pass the value from one page to another with no effort from the user. With a session, if the user inputs values on the original page and you need to access them on another page, you can retrieve the values stored in the session without making the user submit those values again. Sessions are important to any user-related authentication (if you're using the https protocol), user-related validation and customization of visitor experiences in your website. This tutorial will use Visual Basic to illustrate ASP.NET sessions, though the code can be converted to equivalent C# code...
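    The tutorial uses Visual Basic, but the mechanics are identical in C#: a value placed in the Session collection on one page can be read back on any later page in the same session. A minimal hedged example (the control names and the Welcome.aspx page are made up for illustration):

        // Code-behind of the page that collects the value (e.g. a button click handler).
        protected void Submit_Click(object sender, EventArgs e)
        {
            // Store the visitor's input in session state under a key of our choosing.
            Session["UserName"] = NameTextBox.Text;
            Response.Redirect("Welcome.aspx");
        }

        // Code-behind of a later page: the same session key returns the value
        // without asking the user to submit it again.
        protected void Page_Load(object sender, EventArgs e)
        {
            string userName = Session["UserName"] as string;
            if (userName != null)
            {
                GreetingLabel.Text = "Welcome back, " + userName;
            }
        }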

    Read the article

  • How to make audio and video streaming servers work?

    - by Santosh Linkha
    I am a PHP/MySQL developer, and I am interested in the way television and radio are broadcast live over the Internet. I want to know how it works and what its requirements are (and which package in which programming language offers the best support). And please clarify this for me: websites are stored on servers; from my desktop, if I want to broadcast some video, then I need to connect to the web server (to upstream the video). Is there an application to do that, or do I have to code it myself or embed it in my web application -- and which programming language would be suitable (does Python support that)? And I also need a script to handle the upstreamed video or audio (can I do that with PHP)?

    Read the article

  • Looking for a non-cryptographic hash function that returns a single character

    - by makerofthings7
    Suppose I have a dictionary of ASCII words stored in uppercase. I also want to save those words into separate files so that the total word count of each file is approximately the same. By simply looking at the word, I need to know which file it should be in (if it's there at all). Duplicate words should go into the same file and overwrite the last one. My first attempt at solving this problem is to use .NET's object.GetHashCode() function and .Trim() to get one of the "random" characters that pop up. I asked a similar question here. If I only use one character of object.GetHashCode(), I would get a hash code character of A..Z or 0..9. However, saving the result of GetHashCode to disk is a no-no, so I need a substitute. Question: What algorithm (or subset of an algorithm) is appropriate for pigeonholing strings into a single character or range of characters (like hex 0..F, which offers 16 chars)? Real-world usage: I'll use this answer to modify the partition key used in Azure Table storage, as described here.
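    Because object.GetHashCode() is not guaranteed to be stable across processes or .NET versions, a bucket character that gets persisted needs a hash computed by hand. A minimal sketch using FNV-1a reduced to one hex digit (the 16-bucket choice mirrors the 0..F range mentioned above; any stable non-cryptographic hash would do):

        using System;

        static class WordBuckets
        {
            // Deterministic FNV-1a hash over the word's characters, reduced to 0..15
            // and returned as a single hex digit '0'..'F'. Unlike GetHashCode(),
            // the result is safe to persist because it depends only on the input.
            public static char BucketFor(string word)
            {
                const uint fnvOffsetBasis = 2166136261;
                const uint fnvPrime = 16777619;

                uint hash = fnvOffsetBasis;
                foreach (char c in word.ToUpperInvariant())
                {
                    hash ^= c;
                    hash *= fnvPrime;
                }
                return "0123456789ABCDEF"[(int)(hash & 0xF)];
            }
        }

    Because the hash depends only on the word itself, duplicates always land in the same bucket, and the same character can double as the Azure Table partition key suffix.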

    Read the article
