Search Results

Search found 36072 results on 1443 pages for 'database mail'.


  • storing data for maps database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps are of all different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts: I won't need queries that find POIs inside a region, so I don't think there is a need for spatial datatypes. I am considering storing each map as a GeoJSON object (I am using a JS mapping library that accepts GeoJSON). The only issue: what if I want a map of the US Northeast? Then I would have GeoJSON for the US and a separate GeoJSON for the US Northeast, which would be redundant. Would it make sense to have a shape database holding each state, so that when I need a map of the US I can query for every state, and when I need a map of the US Northeast I can again query for just what I need? Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
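    A minimal sketch of that shape-table idea, assuming one GeoJSON feature per smallest unit; the table and column names here are hypothetical:

        -- one row per state/province/country, keyed by region and map group
        CREATE TABLE shape (
            shape_id    INTEGER PRIMARY KEY,
            map_group   VARCHAR(20) NOT NULL,  -- e.g. 'us-states'
            region_code VARCHAR(10) NOT NULL,  -- e.g. 'US-NY'
            geojson     TEXT NOT NULL          -- the feature's geometry as GeoJSON
        );

        -- whole US map: every state in the group
        SELECT geojson FROM shape WHERE map_group = 'us-states';

        -- US Northeast: only the states needed, with no duplicated geometry
        SELECT geojson FROM shape
        WHERE map_group = 'us-states'
          AND region_code IN ('US-CT','US-MA','US-ME','US-NH','US-NJ',
                              'US-NY','US-PA','US-RI','US-VT');

    The client can then wrap the returned features in a single FeatureCollection before handing them to the mapping library.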

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
    1. Why? 2. What is it? 3. How?

    1. Why?

    The main idea of the 'grid' is to share resources and make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one. One of the concerns of running many applications together in one database is: what will happen if one of the applications must be restored because of a human error? Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. The result is often a proliferation of smaller databases, each of which must be maintained and upgraded, and each of which allocates virtual memory and runs its own background processes, wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

    2. What is it?

    Pluggable databases are logical units inside a multitenant container database, which can hold up to 252 of them. The SGA is shared, as are the background processes. The multitenant container database holds the metadata common to all pluggable databases inside its System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pluggable database level (DBA_ views) or at the container level (CDB_ views). There are local users, known only in specific pluggable databases, and common users, known in all containers. Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB, and they can undergo point-in-time recovery.

    3. How?

    Creating a multitenant container database can be done using the Database Configuration Assistant, where you will find the new option Create as Container Database. If you prefer hand-made databases, you can execute the command from an instance in NOMOUNT state:

        CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ...;

    And of course this can also be achieved through Enterprise Manager Cloud Control. A freshly created multitenant container database consists of two containers: the root container as the 'rack', and a seed container, a template for future pluggable databases. There are four ways to create other pluggable databases:

    1. Create an empty pdb from the seed
    2. Plug in a non-CDB
    3. Move a pdb from another CDB
    4. Copy a pdb from another CDB

    We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

    1. Create an empty pdb and use Data Pump in traditional export/import mode or in Transportable Tablespace or full database mode. This method is suitable for pre-12c databases.
    2. Create an empty pdb and use GoldenGate replication; when the pdb catches up with the non-CDB, you fail over to the pdb.
    3. Databases of version 12c or higher can be plugged in with the help of the new DBMS_PDB package.

    This is a demonstration of method 3:

    Step 1: Connect to the non-CDB to be plugged in and create an XML file describing the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.

    Step 2: Check whether the non-CDB is pluggable into the multitenant container database.

    Step 3: Create the pluggable database while connected to the multitenant container database. With the NOCOPY option the existing files are reused, but the tempfile is created anew. A service is created and registered automatically with the listener.

    Step 4: Delete unnecessary metadata from the PDB System tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for it; then connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, this takes a few minutes.

    Step 5: The plugged-in database is automatically synchronised, by creating the common users and roles, when it is opened in read-write mode for the first time.

    Step 6: Verify the tablespaces and users: there is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.
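    The screenshots of the original demonstration are not part of this excerpt; as a rough guide, the SQL behind steps 1 through 6 looks like the following sketch (the file path and the pdb name are examples, while the DBMS_PDB calls are the documented 12c API mentioned above):

        -- Step 1: on the non-CDB, opened read only
        BEGIN
          DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/noncdb.xml');
        END;
        /

        -- Step 2: on the CDB, check pluggability
        SET SERVEROUTPUT ON
        DECLARE
          ok BOOLEAN;
        BEGIN
          ok := DBMS_PDB.CHECK_PLUGGABILITY(
                  pdb_descr_file => '/u01/app/oracle/noncdb.xml',
                  pdb_name       => 'PDB_ORCL');
          DBMS_OUTPUT.PUT_LINE(CASE WHEN ok THEN 'pluggable' ELSE 'not pluggable' END);
        END;
        /

        -- Step 3: plug in, reusing the existing datafiles
        CREATE PLUGGABLE DATABASE pdb_orcl USING '/u01/app/oracle/noncdb.xml'
          NOCOPY TEMPFILE REUSE;

        -- Step 4: clean up the dictionary inside the new pdb
        ALTER SESSION SET CONTAINER = pdb_orcl;
        @?/rdbms/admin/noncdb_to_pdb.sql

        -- Steps 5 and 6: first read-write open triggers synchronisation; then verify
        ALTER PLUGGABLE DATABASE pdb_orcl OPEN;
        SELECT tablespace_name FROM dba_tablespaces;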
    This method of plugging in a non-CDB is fast and easy for 12c databases. To unplug a pluggable database from a CDB, create a new non-CDB, use the new full transportable feature of Data Pump, and then drop the pluggable database.

    About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge of Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post, and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other supported databases. For Oracle database customers we ship three distinct database users:

    Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for database administration use only.

    Product Read/Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but instead use other DCL (Data Control Language) statements and facilities to simulate it.

    Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permission to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but instead use other DCL statements and facilities to simulate it.

    You may notice the words 'by default' in the list above. The values supplied with the installer are defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for the users, a utility is provided with the installation (oragensec for Oracle, db2gensec for DB2, msqlgensec for SQL Server) that generates the security definitions for the above users. It can be executed a number of times for each schema to give users the appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact, you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed, it uses the role to determine the permissions.
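    As a rough illustration of that role-as-tag mechanism for the Oracle case (a sketch only; the user names follow the FW4 convention described below, and the CISUSER and CISREAD roles are assumed to already exist):

        -- read/write product user: tagged with the CISUSER role
        CREATE USER fw4user IDENTIFIED BY a_password;
        GRANT CISUSER TO fw4user;

        -- read-only user: tagged with the CISREAD role
        CREATE USER fw4read IDENTIFIED BY another_password;
        GRANT CISREAD TO fw4read;

    When oragensec is then run against the administration schema, it reads those role memberships to decide which permissions each user receives.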
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the user IDs appropriately. This means:

    Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations, and the CISREAD role with the user you wish to use for read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types do essentially the same, obviously using the technology available within those databases.

    Running oragensec for the read/write user and the read-only user against the appropriate administration user (I will abbreviate this to the ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user names for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I use MWMADM for the administration user and MWMUSER and MWMREAD for the associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.

    Remembering to rerun oragensec against the users whenever I run upgrades, service packs or database-based single fixes. This ensures the users stay in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    DDL (Data Definition Language) - SQL statements that define the database schema and the structures within it. Statements such as CREATE and DROP are examples of DDL.

    DCL (Data Control Language) - SQL statements that define database-level permissions on DDL-maintained objects within the database. Statements such as GRANT and REVOKE are examples of DCL.

    DML (Data Manipulation Language) - SQL statements that alter the data within the tables. Statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML.

    Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, which allows the database to use the administration user for the main processing while making the database session more traceable.

  • Open Source .Net Object Database or Document Database for use in Hosted environment

    - by runxc1 Bret Ferrier
    I am looking at creating a web site, and I want to try to learn either an object database or a document database. I am going to be using a hosting provider, so I won't be able to install any software, and I am unable to purchase any licensing, so it needs to be a free or open-source object/document database. Are there any free object/document databases that don't require an installation of some sort?

  • Database per application VS One big database for all applications

    - by Jorge Vargas
    Hello, I'm designing a few applications that will share 2 or 3 database tables; all of the other tables will be independent of each app. The shared tables contain mostly user information, and cases may arise where other tables need to be shared, but that's my instinct speaking. I'm leaning toward the one-database-for-all-applications solution, because I want referential integrity and I won't have to keep the same information up to date in each of the databases, but I will probably end up with a database of 100+ tables where only groups of ten tables have related information. The database-per-application approach helps me keep everything more organized, but I don't know a way to keep the related tables in all the databases up to date. So, the basic question is: which of the two approaches do you recommend? Thanks, Jorge Vargas.

  • SMTP server to deliver ALL mail to user@localhost

    - by cam8001
    I'd like to configure an SMTP MTA to accept all mail addressed to any domain and deliver it to my local user account. It would be very useful for debugging mail sent by some code I'm working on. I'll be running the server locally; no outside-world interaction is required. To be clear:

        [email protected] - delivered to - cam8001@localhost
        [email protected] - delivered to - cam8001@localhost
        [email protected] - delivered to - cam8001@localhost
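    One way to sketch this with Postfix, assuming local delivery on the debugging box (the pcre table type requires the postfix-pcre package):

        # /etc/postfix/main.cf -- treat every domain as local and deliver
        # every recipient to the cam8001 account
        mydestination = pcre:/etc/postfix/mydestinations
        local_recipient_maps =
        luser_relay = cam8001@localhost

        # /etc/postfix/mydestinations -- match any domain
        /.*/  ACCEPT

    With local_recipient_maps empty, Postfix accepts any local part, and luser_relay redirects everything that has no matching system account to the one mailbox.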

  • Windows 7 Phone Database Rapid Repository – V2.0 Beta Released

    - by SeanMcAlinden
    Hi All, A V2.0 beta has been released for Rapid Repository, the Windows 7 Phone database; it can be downloaded at the following: http://rapidrepository.codeplex.com/ Along with the new View feature, which greatly enhances querying and performance, various bugs have been fixed, including a more serious bug in the caching that caused the GetAll() method to sometimes return inconsistent results (I'm a little embarrassed by this one). If you are currently using V1.0 in development, I would recommend swapping in the beta immediately. A full release will be available very shortly; I just need a few more days of testing and some input from other users/testers.

    *Breaking Changes* The only real change is that RapidContext has moved under the main RapidRepository namespace. Various internal methods have been made 'internal' and replaced with a friendlier API (I imagine not many users will notice this change). Hope you like it. Kind regards, Sean McAlinden

  • Cloning A Database On The Same Server Using RMAN Duplicate From Active Database

    - by alejandro.vargas
    To clone a database using RMAN we used to require an existing RMAN backup; on 11g we can clone databases using the 'from active database' option. In this case we do not require an existing backup: the active datafiles are used as the source for the clone. In order to clone with the source database open, it must be in archivelog mode. Otherwise we can make the clone with the source database mounted, as shown in this example. These are the steps required to complete the clone:

    1. Configure the network
    2. Create a password file for the new database
    3. Create an init.ora for the new database
    4. Create the admin directory for the new database
    5. Shut down and startup mount the source database
    6. Startup nomount the new database
    7. Connect to the target (source) and auxiliary (new clone) databases using RMAN
    8. Execute the duplicate command
    9. Remove the old pfile
    10. Check the new database

    A step-by-step example is provided in this file: rman-duplicate-from-active-database.pdf
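    A sketch of the central duplicate step for a same-server clone (instance names and paths are examples only):

        rman TARGET sys@orcl AUXILIARY sys@clone

        RMAN> DUPLICATE TARGET DATABASE TO clone
                FROM ACTIVE DATABASE
                SPFILE
                  PARAMETER_VALUE_CONVERT '/orcl/', '/clone/'
                  SET db_file_name_convert '/orcl/', '/clone/'
                  SET log_file_name_convert '/orcl/', '/clone/';

    The file-name conversions are what make a same-server clone possible, since the copied datafiles must land in different directories from the source files.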

  • NoSQL as file meta database

    - by fga
    I am trying to implement a virtual file system structure in front of an object storage (OpenStack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it looked like a tree structure, similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered search based on date, name, etc. as fast as possible. So what path should I take? Stick to relational with some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and database to satisfy it? P.S.: Currently the NoSQL choices proposed by my colleagues are Cassandra and MongoDB.

  • Grails Mail port configuration

    - by bsreekanth
    Hello, I am trying to send mail through the Grails mail plugin. I configured it according to the documentation and also followed a few blog posts (http://blog.lourish.com/2010/04/02/sending-asynchronous-html-email-in-grails-with-activemq-jms-and-gmail/). That post mentions that the closure style of configuration overrides the others, but that is not true. Anyway, I tried both approaches, but it seems the port still uses the SMTP default; I get the exception below:

        org.springframework.mail.MailSendException: Mail server connection failed;
        nested exception is javax.mail.MessagingException: Could not connect to
        SMTP host: localhost, port: 25; nested exception is:
        java.net.ConnectException: Connection refused: connect

    Now, I wrote a small program directly against the JavaMail library, and I could send the mail with that. The configuration is shown below. I tried the additional config "mail.smtp.port":"465", but no change, and I used the parameters mentioned in the above blog post with the same result:

        grails {
            mail {
                host = "smtp.gmail.com"
                port = "465"
                username = "[email protected]"
                password = "mypwd"
                props = ["mail.smtp.auth":"true",
                         // "mail.smtp.port":"465",
                         "mail.smtp.socketFactory.port":"465",
                         "mail.smtp.socketFactory.class":"javax.net.ssl.SSLSocketFactory",
                         "mail.smtp.socketFactory.fallback":"false"]
            }
        }

    Thanks in advance. Update: it is not a port or firewall problem, because when I made a Grails application from scratch and tried the same config, everything works. I also asked in the Grails forum: http://grails.1312388.n4.nabble.com/grails-mail-mailSender-does-not-have-config-values-td2237704.html#a2237704. Hoping for a lead to try.

  • Extracting Mail from Microsoft Exchange Server 2007 through IMAPS in Java

    - by abhishekgem84
        props.put("mail.debug", "true");
        props.setProperty("mail.store.protocol", "imaps");
        props.setProperty("mail.imaps.auth.plain.disable", "false");
        props.setProperty("mail.imaps.host", "Mail3.connect.com");
        props.setProperty("mail.imaps.port", "135");
        props.setProperty("mail.imaps.user", "test");
        props.setProperty("mail.imaps.pwd", "123");
        props.setProperty("mail.imaps.ssl.protocols", "SSL");
        props.setProperty("mail.imaps.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        props.setProperty("mail.imaps.socketFactory.fallback", "false");
        props.setProperty("mail.imaps.socketFactory.port", "135");

    I have done all this, but it still fails with: javax.mail.AuthenticationFailedException: failed to connect, no password specified. Kindly help me out. Thanks.
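    For reference, JavaMail has no mail.imaps.pwd property; the password is normally supplied to Store.connect rather than through properties. A minimal sketch (host, user and password are the placeholders from the question; 993 is the standard IMAPS port):

        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Session;
        import javax.mail.Store;

        public class ImapsFetch {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.setProperty("mail.store.protocol", "imaps");

                Session session = Session.getInstance(props);
                Store store = session.getStore("imaps");
                // credentials are passed here, not read from properties
                store.connect("Mail3.connect.com", 993, "test", "123");

                Folder inbox = store.getFolder("INBOX");
                inbox.open(Folder.READ_ONLY);
                System.out.println("Messages: " + inbox.getMessageCount());
                inbox.close(false);
                store.close();
            }
        }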

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node, with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to the functioning instance's. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain account for the database engine). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'; the send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the ExternalMailQueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the other (functioning) instance to the other node in the cluster. Any thoughts? Thanks.
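    A few standard msdb queries that help narrow down this kind of failure (all of the objects below ship with Database Mail):

        USE msdb;

        -- queue items and their current status
        SELECT mailitem_id, sent_status, send_request_date
        FROM dbo.sysmail_allitems
        ORDER BY send_request_date DESC;

        -- per-item error text, often more specific than the event log summary
        SELECT f.mailitem_id, l.description
        FROM dbo.sysmail_faileditems AS f
        JOIN dbo.sysmail_event_log AS l ON l.mailitem_id = f.mailitem_id;

        -- confirm the Database Mail queues are actually running
        EXEC dbo.sysmail_help_status_sp;
        EXEC dbo.sysmail_help_queue_sp @queue_type = 'mail';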

  • SSMTP to forward root@localhost mail

    - by Redconnection
    I would like to forward mail sent to root@localhost on multiple servers to our company admin account (e-mail is hosted on Gmail). I have installed ssmtp on CentOS 5.5 via yum and configured it. I've also changed the last line in /etc/aliases to reflect where mail to root should go. Sending mail to root is delivered without a problem (mail -v root), but mail sent to root@localhost is not delivered to the specified Gmail account.
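    One thing to check: ssmtp does not consult /etc/aliases at all; redirecting root's mail is normally done in ssmtp's own files. A sketch, with the admin address as an example:

        # /etc/ssmtp/ssmtp.conf
        # 'root' receives all mail for userids below 1000
        root=admin@example.com
        mailhub=smtp.gmail.com:587

        # /etc/ssmtp/revaliases
        # rewrite the From address for mail sent by the local root account
        root:admin@example.com:smtp.gmail.com:587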

  • why my zimbra mail server mail is going into the spam folder of yahoo, hotmail etc

    - by sadiq
    Hi friends, all mail from my new Zimbra mail server is going into the spam/junk folders of Yahoo, Hotmail, etc. Any suggestion for delivering it straight to the inbox? Below is the header section of my mail as received by Yahoo:

        X-Virus-Scanned: amavisd-new at
        X-Spam-Flag: NO
        X-Spam-Score: -1.963
        X-Spam-Level:
        X-Spam-Status: No, score=-1.963 tagged_above=-10 required=6.6
            tests=[AWL=-0.083, BAYES_00=-2.599, RCVD_IN_SORBS_WEB=0.619,
            RDNS_NONE=0.1] autolearn=no
        Received: from mail.sara.co.in ([127.0.0.1]) by localhost
            (mail.sara.co.in [127.0.0.1]) (amavisd-new, port 10024) with ESMTP
            id QLBlyaY6ENGi; Fri, 19 Mar 2010 16:52:09 +0530 (IST)
        Received: from mail.sara.co.in (mail.sara.co.in [192.168.1.1]) by
            mail.sara.co.in (Postfix) with ESMTP id 0FC6C3538001;
            Fri, 19 Mar 2010 16:52:08 +0530 (IST)
        Date: Fri, 19 Mar 2010 16:52:08 +0530 (IST)
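    One concrete lead is visible in the header itself: the RDNS_NONE test means the receiving side found no reverse DNS for the sending IP, which on its own is a common reason for junk-foldering. A quick check (the IP shown is a placeholder):

        dig -x 203.0.113.10 +short       # PTR: should return the mail host's name
        dig mail.sara.co.in +short       # A: should return the same IP

    If the two do not match, ask your provider to set a PTR record for the server's IP, and consider publishing an SPF record for the sending domain as well.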

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, "Automating Deployments with SQL Compare command line", I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I'm looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change.

    There are many reasons why up-to-date documentation is valuable, for example when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risk of making incorrect decisions when making changes. Documentation is very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most!

    SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate's SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring documentation that is always up to date.

    The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source-controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you're using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always-up-to-date data dictionary. If you don't already have a CI server in place, there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won't cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate's SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control.

    The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
        $serverName = "server\instance"
        $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list
        $userName = "username"
        $password = "password"

        # Path to SQLDoc.exe
        $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

        $arguments = @(
            "/server:$($serverName)",
            "/database:$($databaseName)",
            "/username:$($userName)",
            "/password:$($password)",
            "/filetype:html",
            "/outputfolder:.",
            # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line
            "/name:$databaseName Report",
            "/copyrightauthor:$([Environment]::UserName)"
        )

        write-host $arguments
        & $SQLDocPath $arguments

    There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line, please see the SQL Doc command line documentation.

    If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project, please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment it and then pass a path to a .sqldoc project file as an argument to this script.

    Conclusion

    Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process, you can trigger the documentation to be run on every change to a source-controlled database and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I've just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone database is queried for performance (this is in the trunk, not in release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve query performance even further. You can download the source from the Microsoft Codeplex site: http://rapidrepository.codeplex.com/

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now that you have the view model, the first thing to do is set up the view at application start-up. This is done using the following syntax.

    View setup

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note*: you do not need to map the Id property; this is done automatically, and a view model id will always be the same as its corresponding entity's.

    Adding filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve query performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to limit the data further, you could also say only scores above 100:

    Add multiple filters

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the Query method directly or perform further querying on the result if required.
    Querying the view

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further filtering

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>()
                .Where(x => x.DateAdded > DateTime.Now.AddDays(-1))
                .ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get full entity

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising the view

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, run it at each application start-up.

    Synchronise the view

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/ Kind regards, Sean McAlinden.

  • Claws Mail: Mail with Attitude

    Linux Magazine: "When other mailers aren't doing the trick, it's time to break out Claws: An extremely configurable and extensible GUI mailer that gives you all the control you'd ever want over your mail without sacrificing ease of use."

  • database normalization

    - by runeveryday
    Someone told me the following table does not satisfy the second normal form (2NF), but I don't know why. I am a newbie at database design; I have read some tutorials on 3NF, but I can't understand 2NF and 3NF well. I hope someone can explain it to me. Thank you.

        +------------+-----------+-------------------+
        |     pk     |    pk     |        row        |
        +------------+-----------+-------------------+
        |     A      |     B     |         C         |
        +------------+-----------+-------------------+
        |     A      |     D     |         C         |
        +------------+-----------+-------------------+
        |     A      |     E     |         C         |
        +------------+-----------+-------------------+
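    For context on what the second normal form checks: 2NF forbids a non-key column from depending on only part of a composite key. In the sample data the non-key column is always C whenever the first key column is A, which suggests exactly such a partial dependency, and the usual fix is to split the table. A sketch under that assumption (names are hypothetical):

        -- composite key (pk1, pk2); row_val appears to depend on pk1 alone
        CREATE TABLE original_t (
            pk1     CHAR(1),
            pk2     CHAR(1),
            row_val CHAR(1),
            PRIMARY KEY (pk1, pk2)
        );

        -- 2NF decomposition: move the partially dependent column out
        CREATE TABLE t_pair (
            pk1 CHAR(1),
            pk2 CHAR(1),
            PRIMARY KEY (pk1, pk2)
        );
        CREATE TABLE t_value (
            pk1     CHAR(1) PRIMARY KEY,
            row_val CHAR(1)
        );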

  • SQL SERVER – T-SQL Script to Take Database Offline – Take Database Online

    - by pinaldave
    Blog reader Joyesh Mitra recently left a comment on one of my very old posts, SQL SERVER – 2005 Take Off Line or Detach Database, which I wrote focusing on taking the database offline. However, I did not include how to bring the offline database back online in that post. The reason I did not write it was that I thought it was a very simple script that almost everyone knows. However, it seems there is something in this procedure that is not so simple for everyone. We all have different expertise and we all try to learn new things, so I see no reason not to write about the script to take the database online.

        -- Create Test DB
        CREATE DATABASE [myDB]
        GO
        -- Take the Database Offline
        ALTER DATABASE [myDB] SET OFFLINE WITH ROLLBACK IMMEDIATE
        GO
        -- Take the Database Online
        ALTER DATABASE [myDB] SET ONLINE
        GO
        -- Clean up
        DROP DATABASE [myDB]
        GO

    Joyesh, let me know if this answers your question. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • April Download Ranking (Database Edition)

    - by rika.tokumichi
    This is the monthly OTN Japan download-ranking post. The top downloads in the Database category for April (counting period: April 1 - April 30) were:

    1. Oracle Database 11g Release 2 (Download) - up from 5th place last month
    2. Oracle Database 10g Express Edition (Download) - up from 3rd place
    3. Oracle SQL Developer 2.1 (2.1.0.63.73) (Download) - down from 1st place
    4. Oracle Database 11g Release 1 (Download) - down from 2nd place
    5. Oracle Database 10g Release 2 (Download) - down from 4th place

    Oracle Database 11g Release 2 took first place, helped along by the release of Oracle Database 11g Release 2 for Windows. Related material: the '11g R2 on Windows' feature articles, the seminar slides 'Oracle Database 11g Release 2 - Windows' (PDF), and an Oracle Database 11g R2 seminar scheduled for June 15.

  • Postfix mail forwarder

    - by Andrew
    Hello, I just bought a dedicated server and I'm trying to install a web server on it. The server runs Ubuntu 10.04. I installed FTP, nginx, PHP, MySQL and BIND, and now I have to install a mail server. For the mail server I'm using Postfix, because it's recommended on Ubuntu. I installed Postfix with apt-get install postfix, but the mail() function from PHP wasn't working. After a little debugging I found a way to solve this: I created an empty file /etc/postfix/main.cf, and it worked. I do have an MX record like this:

        mail         5M IN A     xxx.xxx.xxx.xxx
        example.com. 5M IN MX 1  mail.example.com.

    After that I wanted to forward all e-mail to my Gmail address, so I googled and found Virtual Domain Host Forwarding in the official docs. I added these lines to main.cf:

        virtual_alias_domains = example.com
        virtual_alias_maps = hash:/etc/postfix/virtual

    I created the map file and placed this line in it:

        @example.com [email protected]

    Then I ran in a terminal:

        postmap /etc/postfix/virtual
        postfix reload

    The result: I can send e-mail from PHP with the mail() function, but when I send an e-mail to [email protected], that e-mail isn't forwarded to my Gmail. How do I solve this? -Andrew

    I also tried this, without success: http://rackerhacker.com/2006/12/26/postfix-virtual-mailboxes-forwarding-externally/

    It works now! But I don't know what the problem was. I just installed "Mail Server" from Tasksel and after that it worked fine. Can anyone tell me what Tasksel installed or changed?

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database as its backend. Every week a new release of the product is launched, which needs to fire DML and DDL queries at the database. I usually test each release on a dummy database before applying it to the main database. I create a database dump using the exp command, then import it into the dummy database using imp. Then I test the product on the dummy database and check whether there are any errors. This exp/imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (which would be independent of the live one)? Or is there any option to keep the dummy database in sync with the original database at all times? This could be done by making the product fire its DML and DDL queries at both databases, but that would be a huge performance problem. How can I overcome this?

  • How to 'move' mail from Inbox to custom mail folder? (Mac Mail client + Gmail account)

    - by user27779
    I'm using Mail on Mac OS X with a Gmail account. I can make a new mailbox folder, and mails can be dragged into it. But the mails still remain in the Inbox; they are copied, not moved (it's the 'label' feature of Gmail, I know). This drives me crazy, because I can't separate mails by purpose. Is there any way to make the Mail application 'move' mails into my custom mailbox (i.e. remove them from the Inbox)?

  • SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database

    - by pinaldave
    I have written a few articles recently on the subject of the guest account. Here's a quick list of these articles:

    SQL SERVER – Disable Guest Account – Serious Security Issue
    SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login 'test' as the user is currently logged in.
    SQL SERVER – Detecting guest User Permissions – guest User Access Status

    One of the pieces of advice I gave in all three blog posts was: disable the guest user in user-created databases. Additionally, I mentioned that one should leave the guest account enabled in the MSDB database. I got many questions asking if there is any specific reason why it should be kept enabled, questions like, "What is the reason that the MSDB database needs the guest user?" Honestly, I did not know that the concept of the guest user would create so much interest among readers. So now let's turn this blog post into a questions-and-answers format.

    Q: What will happen if the guest user is disabled in the MSDB database?
    A: Lots of bad things will happen, for example Error 916: Logins can connect to this instance of SQL Server but they do not have specific permissions in a database to receive the permissions of the guest user.

    Q: How can I determine if the guest user is enabled or disabled for any specific database?
    A: There are many ways to do this. Make sure that you run each of these methods in the context of the database. For example, for the msdb database you can run the following code:

        USE msdb;
        SELECT name, permission_name, state_desc
        FROM sys.database_principals dp
        INNER JOIN sys.database_permissions sp
            ON dp.principal_id = sp.grantee_principal_id
        WHERE name = 'guest' AND permission_name = 'CONNECT';

    There are many other methods to detect the guest user status. Read about them here: Detecting guest User Permissions – guest User Access Status.

    Q: What is the default status of the guest user account in a database?
    A: Enabled in master, tempdb and msdb; disabled in the model database.

    Q: Why is the default status of the guest user disabled in the model database?
    A: It is not recommended to enable the guest user in a user database, as it can introduce a serious security threat. It can seriously damage the database if configured incorrectly. Read more here: Disable Guest Account – Serious Security Issue.

    Q: How do I disable the guest user?
    A: REVOKE CONNECT FROM guest

    Q: How do I enable the guest user?
    A: GRANT CONNECT TO guest

    Did I miss any critical question in the list? Please leave your question as a comment and I will add it to this list. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send e-mail through Google from my localhost (via PHP 5.3), but Google keeps blocking my requests. I tried to follow the solutions given for a few similar questions, but for some reason they do not work. I followed these instructions to configure it: http://www.dnsexit.com/support/mailrelay/postfix.html

    My main.cf looks like this:

        relayhost = [smtp.gmail.com]:587
        smtp_fallback_relay = [relay.google.com]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =

    My sasl_passwd looks like this:

        [smtp.gmail.com]:587 [email protected]:password

    And this is what the mail.log rows look like:

        Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from=
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command))
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
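    The 530 response in the log says the relay wants TLS negotiated before authentication. A sketch of the main.cf additions that enable STARTTLS in the Postfix SMTP client (the CA file path below is the usual Ubuntu location; it varies by distribution):

        smtp_tls_security_level = encrypt
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

    After a postfix reload, the log should show a TLS handshake before MAIL FROM instead of the 530 bounce.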
