Search Results

Search found 36072 results on 1443 pages for 'database mail'.


  • Database Insider - July 2012 issue

    - by Javier Puerta
    The July issue of the Database Insider newsletter is now available. (Full newsletter here) INFORMATION INDEPTH NEWSLETTER: Database Insider Edition - July 2012. Optimize Your Communications Applications on Oracle Exadata: the communications industry is experiencing explosive growth, which means that high-performing, scalable, and cost-effective solutions are critical for success. Learn more by accessing white papers, success stories, product information, videos, podcasts, and more. Register now. Oracle Exadata Enables Neuralitic's SevenFlow Application Improvements: improves application performance 3x and compression rate/scalability 10x. ElectraCard Services Runs Oracle Exadata on Oracle Linux: achieves processing speeds greater than 12,000 TPS and increases performance 2x. Read the full newsletter here.

    Read the article

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this one required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values): INSERT INTO mytable (column1, column2, ..., column90) VALUES ('value1', 'value2', ..., 'value90') and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and I fear writing the test code will be equally tedious. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
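
    One widely used approach (not taken from the post itself) is to keep the column names and values in a single associative structure and generate both the column list and the placeholder list from it, so the 90 columns are never typed twice. A minimal sketch of the idea, written here in Python and assuming a MySQL DB-API driver that uses %s placeholders (e.g. MySQLdb or mysql-connector-python), with mytable as in the post:

        def insert_row(conn, table, row):
            # Build a parameterized INSERT from a dict of column -> value.
            # Table and column names must come from trusted code, never from user input.
            columns = ", ".join(row.keys())
            placeholders = ", ".join(["%s"] * len(row))
            sql = "INSERT INTO %s (%s) VALUES (%s)" % (table, columns, placeholders)
            cur = conn.cursor()
            cur.execute(sql, list(row.values()))
            cur.close()
            conn.commit()

        # Usage: the 90 columns live in one dict, so the query text is generated, not typed.
        row = {"column1": "value1", "column2": "value2"}  # ... through column90
        # insert_row(conn, "mytable", row)

    The same trick works in PHP by imploding array_keys() for the column list and building an equal-length list of placeholders for a prepared statement, which also answers the testing concern: only the generator function needs a test, not 90 hand-typed column names.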

    Read the article

  • Sharing Large Database Backup Among Team

    - by MattGWagner
    I work on a team of three to five developers who work on an ASP.net web application remotely. We currently run a full local database from a recent backup on all of our machines during development. The current backup, compressed, is about 18 GB. I'm looking to see if there's an easier way to keep all of our local copies relatively fresh without each of us individually downloading the 18 GB file over HTTP from our web server on a regular basis. I guess FTP is an option, but it won't speed the process up at all. I'm familiar with torrents, and the thought keeps hitting me that something like that would be effective, but I'm unsure of the security or the process.

    Read the article

  • PHP mail() function stopped working on Windows 2008 R2 IIS 7.5. Why?

    - by Karl
    PHP 5.3.13 and, as noted, IIS 7.5. PHP mail() was working fine until I did 3 things (at the same time): (a) added memory to the server, taking it from 4gb to 5gb; (b) ran Windows Update and applied all available updates; (c) removed the SQL Server installation. The Windows 2008 R2 SMTP server still works fine. I know this because I can drop a file in the pickup folder and the mail is delivered. This PHP test script: <?php $to='my_name@another_domain.com'; $subject='Test email using PHP'; $message='This is a test email message'. "\r\n"; $headers='From:[email protected]' . "\r\n" . 'Reply-To:[email protected]' . "\r\n" . 'X-Mailer: PHP/' . phpversion(); mail($to, $subject, $message, $headers, '[email protected]'); ?> creates this entry in the PHP log file (PHP's mail.log): mail() on [C:\www\pgs.com\store\admin\test_php_mail.php:1]: To: my_name@another_domain.com -- Headers: From:[email protected] Reply-To:[email protected] X-Mailer: PHP/5.3.13 When using PHP now, I never see a file dropped into the IIS pickup folder. And one other thing: when using previously working features on the site (such as password recovery), there is no entry made in the mail.log. (The mail log has just been set up to help solve this problem.) How do I fix this? Or at least, how do I diagnose the problem? Thanks.

    Read the article

  • Database Context and Singleton injection with IoC

    - by zaitsman
    All of the below relates to an ASP.NET C# app. I have a singleton Settings MemoryCache that reads values from the database on first access and caches them, then invalidates them using an SQL Service Broker message and re-reads as required. For the purposes of standard controllers, I create my DbContext in a request scope. However, this obviously means that I can't use the same context in the Settings cache class, since that is a singleton and we have a scope collision. At the moment, I ended up with two db contexts - the controllers get theirs via the IoC container, whereas the singleton just creates its own. However, I am not satisfied with this approach (mostly due to the way I feel about two contexts; the cache doesn't set anything on the db, hence concurrency is not as much of an issue). What is a better way to do it?
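
    A common way around the scope collision (an illustration only, not necessarily what the poster settled on) is to inject a context factory into the singleton rather than a context, so the cache creates a short-lived context for each reload and disposes it immediately. The shape of the pattern, sketched in Python with hypothetical load_settings and dispose methods:

        class SettingsCache:
            # Singleton-style cache that receives a context *factory*, not a context.
            def __init__(self, context_factory):
                self._context_factory = context_factory   # callable returning a fresh context
                self._values = None

            def get(self, key):
                if self._values is None:
                    ctx = self._context_factory()          # new context just for this reload
                    try:
                        self._values = dict(ctx.load_settings())
                    finally:
                        ctx.dispose()                      # context never outlives the call
                return self._values.get(key)

            def invalidate(self):                          # called from the Service Broker handler
                self._values = None                        # next get() reloads with a fresh context

    In the ASP.NET container this corresponds to registering something like a Func<DbContext> (or a dedicated factory class) with the singleton, while keeping the per-request registration for the controllers.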

    Read the article

  • Why use binary files to stack up different versions on DMSs?

    - by edgarator
    I've used both Liferay and Alfresco, trying to use them as the Document Management System for an intranet. I noticed the following: they use the file system and the database to store files; they use a GUID to name the file on the filesystem, and that GUID is used as an Id in the database; the GUID-named file is a binary file; the GUID-named binary file stores all versions of a given file; the path for the file in the DMS doesn't match the one in the file system; and the URL makes reference to the GUID when a certain file is requested. What I want to know is why this is, and what would be the best way of doing it. For example, how would you create the binary file (zip?), and what parts would you keep in the binary file versus the database (metadata, path?). I can assume some of the benefits of doing it like this, such as having the same URL for a file regardless of its current document path, and having only one file even if the file has changed names over time.
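
    To make that layout concrete, here is a minimal sketch of the pattern (an illustration only, not how Liferay or Alfresco actually implement it): metadata such as the logical path and the version history lives in the database, while each version's content sits on disk under a GUID that never changes when the document is renamed or moved.

        import os
        import sqlite3
        import uuid

        class DocumentStore:
            def __init__(self, root, db_path):
                self.root = root
                self.db = sqlite3.connect(db_path)
                self.db.execute("CREATE TABLE IF NOT EXISTS doc (guid TEXT PRIMARY KEY, path TEXT)")
                self.db.execute("CREATE TABLE IF NOT EXISTS version (guid TEXT, ver INTEGER, file TEXT)")

            def add(self, logical_path, content):
                guid = str(uuid.uuid4())            # stable id, independent of the logical path
                self.db.execute("INSERT INTO doc VALUES (?, ?)", (guid, logical_path))
                self._write_version(guid, 1, content)
                return guid                         # a URL can reference this GUID forever

            def update(self, guid, content):
                (last,) = self.db.execute("SELECT MAX(ver) FROM version WHERE guid=?", (guid,)).fetchone()
                self._write_version(guid, last + 1, content)

            def _write_version(self, guid, ver, content):
                fname = os.path.join(self.root, "%s.%d" % (guid, ver))
                with open(fname, "wb") as f:        # content is bytes; one file per version here
                    f.write(content)
                self.db.execute("INSERT INTO version VALUES (?, ?, ?)", (guid, ver, fname))
                self.db.commit()

    Packing all versions into a single binary file per GUID, as the post describes, is a further refinement of the same idea; either way the database rows remain the source of truth for paths and version numbers, which is what keeps the URL stable across renames.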

    Read the article

  • PHP CMS without a database

    - by user1791795
    A friend of mine asked me for an easy website. As it was just 3 subpages, with no database needed or anything, I've done it quite fast with plain PHP, HTML, JS and CSS. But then another friend showed up, and another. They only wanted to arrange their navigation differently, use some different pictures, etc. So I thought: is there any kind of CMS that allows building a small business website (barbershop, local grocery shop) with a small list of subpages, yet lets us arrange the look'n'feel? There's no need for a database, as the content won't change, or it can be stored in simple text files, for example. Things like WordPress might simply be overkill. Does anyone know such a CMS?

    Read the article

  • What’s your worst day? – Database Deployment Edition

    Ever had your database deployment derailed? Or, if not derailed, delayed? We’d like to hear what caused the problem and whether you think it could have been avoided. Our favourite tale of woe will win a $50 Amazon certificate. Can 41,000 DBAs really be wrong? Join 41,000 other DBAs who are following the new series from the DBA Team: the 5 Worst Days in a DBA’s Life. Part 3, As Corrupt As It Gets, is out now – read it here.

    Read the article

  • New TTnn and TMON redo transport background processes in Oracle Database 12c

    - by Liu Maclean(???)
    In Oracle 11g, Data Guard redo transport relied on three kinds of processes: ARCi (FAL - archived redo shipping, ping, local-only archivals), NSAi (async; named TTnn as of 12.1), and NSSi (sync - live redo shipping). In 12c the TTnn processes (TT00 in the example below) handle asynchronous redo shipping, and a new TMON background process acts as the redo transport monitor. SQL> select banner from v$version where rownum=1; BANNER -------------------------------------------------------------------------------- Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production SQL> select program,pid from v$process where program like '%TMON%' or Program like '%TT%'; PROGRAM PID ------------------------------ ---------- ORACLE.EXE (TMON) 7 ORACLE.EXE (TT00) 24 In both releases LGWR still writes the redo locally; the asynchronous shipping process (NSAi in 11g, TTnn in 12c) reads it and sends it to the standby. (The original post includes diagrams comparing the 11g and 12c process layouts.)

    Read the article

  • Oracle Multitenant for DUMMIES eBook and related Oracle Database 12c resources

    - by OTN-J Master
    Oracle Database 12c introduces the new multitenant architecture, and a series of introductory articles on Oracle Database 12c R1 is being published on EnterpriseZine/DB Online. Also available is the free eBook "Oracle Multitenant for DUMMIES", part of Wiley's well-known "Dummies" series: a compact introduction (A6 format, about 40 pages) whose five chapters cover what Oracle Multitenant is, how it works, how to use and manage it, and its key benefits. The post closes with details of a related seminar session scheduled for November 8, 2013.

    Read the article

  • How do I iterate the records in a database using LINQ when I have already built the LINQ DB code?

    - by Seth Spearman
    I asked on SO a few days ago what was the simplest, quickest way to build a wrapper around a recently completed database. I took the advice and used sqlmetal to build LINQ classes around my database design. Now I am having two problems. One, I don't know LINQ. And, two, I have been shocked to realize how hard it is to learn. I have a book on LINQ (LINQ in Action by Manning) and it has helped some, but at the end of the day it is going to take me a couple of weeks to get traction and I need to make some progress on my project today. So, I am looking for some help getting started. Click HERE to see my simple database schema. Click HERE to see the VB class that was generated for the schema. My needs are simple. I have a console app. The main table is the SupplyModel table. Most of the other tables are child tables of the SupplyModel table. I want to iterate through each of the SupplyModel records. I want to grab the data for a supply model and then DoStuff with the data. And I also need to iterate through the child records for each supply model, for example the NumberedInventories, and DoStuff with those as well. I want to iterate only the SupplyModels that have IDs that are in an array of strings containing the IDs. I need help doing this in VB rather than C# if possible. Thanks for your help. Seth

    Read the article

  • Application connection with database persists even after successful transaction

    - by anupam3m
    Hi, I am using Spring.Data.NHibernate12 at my database level. My application's connection with the database is not getting released. Below is the relevant Dataconfiguration.xml: <?xml version="1.0" encoding="utf-8" ?> <objects xmlns="http://www.springframework.net" xmlns:db="http://www.springframework.net/database"> <object id="AuditLogger" type="Risco.Rsp.Ac.Audit.AuditLogger, Risco.Rsp.Ac.Audit" singleton="false"> <property name="CacheSettings" ref="CacheSettings"/> </object> <object id="CacheSettings" type="Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities.UpdateEntityCacheHelper, Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities" singleton="false"/> <object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core"> <property name="ConfigSections" value="databaseSettings"/> </object> <db:provider id="AMACDbProvider" provider="OracleClient-2.0" connectionString="Data Source=RISCODEVDB;User ID=amsbvt; Password=amsuser1234;"/> Risco.Rsp.Ac.AMAC.Mapping Risco.Rsp.Ac.Logging.Appenders Risco.Rsp.Ac.AMAC.CacheMappings --

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
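
    If the list really is read and written only as a whole, one pragmatic option (an illustration, not advice from the linked answers) is to serialize it into a text column as JSON rather than a binary blob, which keeps it human-readable and pushes the serialization worry down to one library call. A small sketch in Python with SQLite and a hypothetical playlist table:

        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE playlist (id INTEGER PRIMARY KEY, items TEXT)")

        # Write: the ordered list of unique items is stored as one JSON document.
        items = ["intro", "verse", "chorus", "bridge"]
        conn.execute("INSERT INTO playlist (id, items) VALUES (?, ?)", (1, json.dumps(items)))

        # Read: the whole list comes back in one query, order preserved.
        (raw,) = conn.execute("SELECT items FROM playlist WHERE id = ?", (1,)).fetchone()
        assert json.loads(raw) == items

    The trade-off is the usual one: the database can no longer index or query individual elements, which is acceptable only because the list is treated as atomic.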

    Read the article

  • How can I manage a FIFO-queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write a SQL query that selects the correct price? For example: if I have 3 items in for 75 and then 3 items in for 80, then I have two out for 75, and the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the price from the third and fourth row. For example, is there any way to SELECT the third row in the In table? I can not use auto_increment as an identifier for e.g. the "third" row, because the tables will contain posts for other items too. The rows will not be deleted; they will be kept for accountability reasons. SELECT Price FROM In WHERE ...?
    NEW database design:
        In:   Supply_ID | Price
              1         | 75
              1         | 75
              1         | 75
              2         | 80
              2         | 80
        Out:  Delivery_ID | Price
              1           | 75
              1           | 75
              2           | X    <- ?
              3           | Y    <- ?
    OLD database design:
        In:   Supply_ID | Quantity | Price
              1         | 3        | 75
              2         | 3        | 80
        Out:  Delivery_ID | Quantity | Price
              1           | 2        | 75
              2           | 1        | X    <- ?
              3           | 1        | Y    <- ?
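
    One way to express the matching (a sketch of the idea, not a ready-made answer from the thread) is to give every individual In row and every individual Out row a running position per item and join on that position; in SQL engines that support window functions this is what ROW_NUMBER() provides. The same logic in Python, using the NEW design's per-unit rows:

        # FIFO matching: the Nth unit taken out gets the price of the Nth unit put in.
        in_rows = [(1, 75), (1, 75), (1, 75), (2, 80), (2, 80)]   # (Supply_ID, Price), one row per unit
        out_ids = [1, 1, 2, 3]                                    # Delivery_IDs, one Out row per unit

        # Pair each Out row with the In row at the same running position.
        out_prices = [(delivery, in_rows[pos][1]) for pos, delivery in enumerate(out_ids)]
        print(out_prices)   # [(1, 75), (1, 75), (2, 75), (3, 80)]  ->  X = 75, Y = 80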

    Read the article

  • At what point is it worth using a database?

    - by radix07
    I have a question relating to databases and at what point it is worth diving into one. I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller. We are at an odd point where we have enough data that it would be feasible to implement a database (around 700+ items and growing) to manage everything, but I am not sure it is worth the time right now to deal with. I have no problems implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and using the Qt library to access and modify it. I don't have much experience managing data at this level and am curious what may be the best way to approach this. So what are some of the real benefits of using a database, if any, in this case? I realize much of this can be very application specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful. Thanks
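
    For scale, an embedded database takes very little setup, which is often the deciding factor at a few hundred items. A purely illustrative sketch using SQLite from Python (the same idea applies to the Qt SQL module mentioned above), with invented item names:

        import sqlite3

        conn = sqlite3.connect("controller_items.db")   # a single file, no server to manage
        conn.execute("CREATE TABLE IF NOT EXISTS item (id INTEGER PRIMARY KEY, name TEXT, value REAL)")

        # Bulk-load the ~700 items that currently live in Excel exports.
        rows = [(1, "setpoint_a", 1.5), (2, "setpoint_b", 2.0)]   # hypothetical sample data
        conn.executemany("INSERT OR REPLACE INTO item VALUES (?, ?, ?)", rows)
        conn.commit()

        # The GUI side becomes a query instead of a spreadsheet parse.
        print(conn.execute("SELECT name, value FROM item ORDER BY name").fetchall())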

    Read the article

  • In Python, how to make sure a database connection will always close before leaving a code block?

    - by Cawas
    I want to prevent database connection being open as much as possible, because this code will run on an intensive used server and people here already told me database connections should always be closed as soon as possible. def do_something_that_needs_database (): dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'], passwd=args['database_pass'], db=args['database_tabl'], cursorclass=MySQLdb.cursors.DictCursor) dbCursor = dbConnection.cursor() dbCursor.execute('SELECT COUNT(*) total FROM table') row = dbCursor.fetchone() if row['total'] == 0: print 'error: table have no records' dbCursor.execute('UPDATE table SET field="%s"', whatever_value) return None print 'table is ok' dbCursor.execute('UPDATE table SET field="%s"', another_value) # a lot more of workflow done here dbConnection.close() # even more stuff would come below I believe that leaves a database connection open when there is no row on the table, tho I'm still really not sure how it works. Anyway, maybe that is bad design in the sense that I could open and close a DB connection after each small block of execute. And sure, I could just add a close right before the return in that case... But how could I always properly close the DB without having to worry if I have that return, or a raise, or continue, or whatever in the middle? I'm thinking in something like a code block, similar to using try, like in the following suggestion, which obviously doesn't work: def do_something_that_needs_database (): dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'], passwd=args['database_pass'], db=args['database_tabl'], cursorclass=MySQLdb.cursors.DictCursor) try: dbCursor = dbConnection.cursor() dbCursor.execute('SELECT COUNT(*) total FROM table') row = dbCursor.fetchone() if row['total'] == 0: print 'error: table have no records' dbCursor.execute('UPDATE table SET field="%s"', whatever_value) return None print 'table is ok' dbCursor.execute('UPDATE table SET field="%s"', another_value) # again, that same lot of line codes done here except ExitingCodeBlock: closeDb(dbConnection) # still, that "even more stuff" from before would come below I don't think there is anything similar to ExitingCodeBlock for an exception, tho I know there is the try else, but I hope Python already have a similar feature... Or maybe someone can suggest me a paradigm move and tell me this is awful and highly advise me to never do that. Maybe this is just something to not worry about and let MySQLdb handle it, or is it?
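
    What the question is reaching for is essentially try/finally, or more idiomatically a context manager: wrapping the connection in contextlib.closing() guarantees that close() runs however the block is left (return, raise, or falling off the end). A sketch along those lines, keeping the original's Python 2 style and its args dict, whatever_value and another_value, which are defined elsewhere in the poster's script:

        import contextlib
        import MySQLdb
        import MySQLdb.cursors

        def do_something_that_needs_database():
            dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'],
                                           passwd=args['database_pass'], db=args['database_tabl'],
                                           cursorclass=MySQLdb.cursors.DictCursor)
            # closing() guarantees dbConnection.close() runs on return, raise or fall-through.
            with contextlib.closing(dbConnection):
                dbCursor = dbConnection.cursor()
                dbCursor.execute('SELECT COUNT(*) total FROM table')
                row = dbCursor.fetchone()
                if row['total'] == 0:
                    print 'error: table have no records'
                    dbCursor.execute('UPDATE table SET field="%s"', whatever_value)
                    return None
                print 'table is ok'
                dbCursor.execute('UPDATE table SET field="%s"', another_value)
                # ...the rest of the workflow; the connection still closes on any exit path

    A custom context manager achieves the same thing, and some drivers' connection objects also support the with statement directly, though what they do on exit (commit/rollback versus close) varies by driver; the point is that cleanup attached to the with block cannot be skipped by an early return.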

    Read the article

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have 2 server clusters: the first is made up of typical web applications backed by SQL databases. The second is made up of highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types, for example, reporting back and storing the results of a game (which should ultimately end up in the database). We're considering several approaches for inter-server communication: just sharing the MySQL databases between clusters (introducing SQL to the game servers); sharing data in a distributed key-value store like Memcache, Redis, etc.; using an RPC technology like Google ProtoBufs or Apache Thrift; or using RESTful web services (the game server would POST back to the web servers, for example). At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency into the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail. Are there any other good reasons not to use one or the other approach? Which would be easier to scale?
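
    For the web-service option the touch point can stay very small: the game server POSTs a finished game's results to an HTTP endpoint on the web cluster, and the web application writes them to SQL, so the game servers never gain a database dependency. A rough sketch (the endpoint URL and payload shape are invented for illustration):

        import json
        import urllib.request

        def report_game_result(result):
            # POST a finished game's result from the game server to the web tier.
            payload = json.dumps(result).encode("utf-8")
            req = urllib.request.Request(
                "https://web.example.com/api/game-results",   # hypothetical endpoint on the web cluster
                data=payload,
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req, timeout=5) as resp:   # short timeout so a web-tier outage
                return 200 <= resp.status < 300                    # cannot stall the game loop

        # report_game_result({"match_id": 42, "winner": "player_7", "score": [3, 1]})

    Queuing failed posts for retry on the game-server side addresses the "many more ways for communication to fail" concern without reintroducing a shared database.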

    Read the article

  • What's the best way to access a MS Access database using PHP?

    - by Jack Roscoe
    Hi, I need to access some data from an MS Access database and retrieve some data from it using PHP. I've looked around the web, and found the following line which seems to correctly connect to the database: $conn->Open("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\wamp\www\data\MYDB.mdb"); However, I have tried to retrieve some data in the following way: $query = "SELECT pageid FROM pages_table"; $result = mysqli_query($conn, $query); $amount_of_pages = 0; if(mysqli_num_rows($result) <= 0) echo "No results found."; else while($row = mysqli_fetch_array($result, MYSQL_ASSOC)) $amount_of_pages++; And was presented with the following errors: Warning: mysqli_query() expects parameter 1 to be mysqli, object given in C:\wamp\www\data\index.php on line 19 Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, null given in C:\wamp\www\data\index.php on line 23 No results found. I don't really understand the connection to the Access database, is there something I should be doing differently? Thanks in advance for any help.

    Read the article

  • Best way to run multiple queries per second on database, performance wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required, and how to get the best performance. I currently have a method for each type of query I need to do (for example, update a row in a database). I also have a method to create the database connection. Below is my simplified code. public static void addOneForUserInChannel(String channel, String username) throws SQLException { Connection dbConnection = null; PreparedStatement ps = null; String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?"; try { dbConnection = getDBConnection(); ps = dbConnection.prepareStatement(updateSQL); ps.setString(1, username); ps.executeUpdate(); } catch(SQLException e) { System.out.println(e.getMessage()); } finally { if(ps != null) { ps.close(); } if(dbConnection != null) { dbConnection.close(); } } } And my DB connection private static Connection getDBConnection() { Connection dbConnection = null; try { Class.forName(DB_DRIVER); } catch (ClassNotFoundException e) { System.out.println(e.getMessage()); } try { dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER,DB_PASSWORD); return dbConnection; } catch (SQLException e) { System.out.println(e.getMessage()); } return dbConnection; } This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions: Is there a way to have a persistent database connection throughout the entire run time of the process? If so, should I do this? Are there any other optimizations that I should do to help with performance? Thanks

    Read the article

  • Informix "Database locale information mismatch"

    - by lmmortal
    I have Informix 11.5 running on my Win-2003 box with a few databases in it. The system databases have locale en_us.819; my custom databases have locale en_us.57372 (UTF8). There is also an application deployed to JBoss 4.0.2 which has a few datasources configured for those custom databases. <local-tx-datasource> <jndi-name>InformixDS</jndi-name> <connection-url>jdbc:informix-sqli://@database.server@:@database.port@/tcs_catalog:[email protected]@</connection-url> <driver-class>com.informix.jdbc.IfxDriver</driver-class> <user-name>@database.username@</user-name> <password>@database.password@</password> <new-connection-sql>set lock mode to wait 5</new-connection-sql> <check-valid-connection-sql>select '1' from dual</check-valid-connection-sql> <metadata> <type-mapping>InformixDB</type-mapping> </metadata> I'm logged in as Administrator, and when I start JBoss the following error is shown: Caused by: java.sql.SQLException: Database locale information mismatch. at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373) at com.informix.jdbc.IfxSqli.a(IfxSqli.java:3208) at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3518) at com.informix.jdbc.IfxSqli.dispatchMsg(IfxSqli.java:2353) at com.informix.jdbc.IfxSqli.receiveMessage(IfxSqli.java:2269) at com.informix.jdbc.IfxSqli.executeOpenDatabase(IfxSqli.java:1786) at com.informix.jdbc.IfxSqliConnect.<init>(IfxSqliConnect.java:1327) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:501) at com.informix.jdbc.IfxDriver.connect(IfxDriver.java:254) at org.jboss.resource.adapter.jdbc.local.LocalManagedConnectionFactory.createManagedConnection(LocalManagedConnectionFactory.java:151) ... 160 more Caused by: java.sql.SQLException at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373) at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3523) ... 170 more DB_LOCALE and CLIENT_LOCALE are set to en_us.utf8 for Administrator. When I set DB_LOCALE and CLIENT_LOCALE to en_us.utf8 in Server Studio, I can connect to my databases. Where should I set DB_LOCALE and CLIENT_LOCALE to avoid this "Database locale information mismatch" error? Thanks.

    Read the article

  • How to Block an HTTP Website Along with All Its Subdomains Using IPTABLES

    - by netnovice
    I run a small HTTP web proxy site. We can not modify anything in the proxy program. A few users mainly use Yahoo web mail for spamming, and we need to block Yahoo web mail access only (blocking the complete Yahoo website is also OK) through our proxy, especially .mail.yahoo.com. For example, we need to block URLs like http://uk-mg61.mail.yahoo.com and http://in-mg61.mail.yahoo.com, etc. Note: we generally open http://mail.yahoo.com in a browser, but after logging in it forwards to URLs like the above, and all of those are subdomains of mail.yahoo.com. My goal is that if we can get the IP list for every available subdomain of mail.yahoo.com, I can block it totally. We can only use IPTABLES... I know that in the proxy itself we could check the HTTP Host header for .mail.yahoo.com and block it. Solution: the following is what I did using IPTABLES. I collected IP CIDR blocks for Yahoo, mainly for Yahoo web mail (mail.yahoo.com), as much as possible (using the Linux host and whois commands) [like 66.163.160.0/19 and 98.136.0.0/14 etc.] and applied commands like iptables -A OUTPUT -p tcp -d 66.163.160.0/19 -m state --state NEW -j DROP etc. Things are working fine: users can not access Yahoo mail. BUT the problem is I need to keep the available Yahoo CIDR IP list up to date... I am ready to do that every week, and I have collected many ranges from the net... There are countless subdomains of mail.yahoo.com and it seems Yahoo adds new IPs every week. What I observed is that sometimes a user can bypass our rule, and the reason is obviously that not all the available IPs are entered in IPTABLES yet. What we need to do is enter all IPs of mail.yahoo.com. But where do I find all subdomains of mail.yahoo.com? I know we can get them from DNS, but I may not be allowed to make a DNS AXFR query, and doing reverse DNS will have a performance issue. I want to know all subdomains of .mail.yahoo.com. Can I get them from the Yahoo site? I have the list of all Yahoo SMTP IPs... but I need the webmail IPs... (http://public.yahoo.com/carloc/ymail.html) Can you please share your idea. Thank you

    Read the article

  • How to send mail from ASP.NET with IIS6 SMTP in a dedicated server?

    - by Julio César
    Hi. I'm trying to configure a dedicated server that runs ASP.NET to send mail through the local IIS SMTP server but mail is getting stuck in the Queue folder and doesn't get delivered. I'm using this code in an .aspx page to test: <%@ Page Language="C#" AutoEventWireup="true" %> <% new System.Net.Mail.SmtpClient("localhost").Send("[email protected]", "[email protected]", "testing...", "Hello, world.com"); %> Then, I added the following to the Web.config file: <system.net> <mailSettings> <smtp> <network host="localhost"/> </smtp> </mailSettings> </system.net> In the IIS Manager I've changed the following in the properties of the "Default SMTP Virtual Server". General: [X] Enable Logging Access / Authentication: [X] Windows Integrated Authentication Access / Relay Restrictions: (o) Only the list below, Granted 127.0.0.1 Delivery / Advanced: Fully qualified domain name = thedomain.com Finally, I run the SMTPDiag.exe tool like this: C:\>smtpdiag.exe [email protected] [email protected] Searching for Exchange external DNS settings. Computer name is THEDOMAIN. Failed to connect to the domain controller. Error: 8007054b Checking SOA for gmail.com. Checking external DNS servers. Checking internal DNS servers. SOA serial number match: Passed. Checking local domain records. Checking MX records using TCP: thedomain.com. Checking MX records using UDP: thedomain.com. Both TCP and UDP queries succeeded. Local DNS test passed. Checking remote domain records. Checking MX records using TCP: gmail.com. Checking MX records using UDP: gmail.com. Both TCP and UDP queries succeeded. Remote DNS test passed. Checking MX servers listed for [email protected]. Connecting to gmail-smtp-in.l.google.com [209.85.199.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to gmail-smtp-in.l.google.com. Connecting to gmail-smtp-in.l.google.com [209.85.199.114] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to gmail-smtp-in.l.google.com. Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt2.gmail-smtp-in.l.google.com. Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.114] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt2.gmail-smtp-in.l.google.com. Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt1.gmail-smtp-in.l.google.com. Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt2.gmail-smtp-in.l.google.com. Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.114] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt2.gmail-smtp-in.l.google.com. Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.114] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to alt1.gmail-smtp-in.l.google.com. Connecting to gsmtp183.google.com [64.233.183.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to gsmtp183.google.com. Connecting to gsmtp147.google.com [209.85.147.27] on port 25. Connecting to the server failed. Error: 10051 Failed to submit mail to gsmtp147.google.com. I'm using ASP.NET 2.0, Windows 2003 Server and the IIS that comes with it. Can you tell me what else to change to fix the problem? 
Thanks @mattlant. This is a dedicated server; that's why I'm installing the SMTP service manually. EDIT: I use Exchange so it's a little different, but it's called a smart host in Exchange; in the plain SMTP service config I think it's called something else - can't remember the exact setting name. Thank you for pointing me at the Smart host field. Mail is getting delivered now. In the Default SMTP Virtual Server properties, on the Delivery tab, click Advanced and fill the "Smart host" field with the address that your provider gives you. In my case (GoDaddy) it was k2smtpout.secureserver.net. More info here: http://help.godaddy.com/article/1283

    Read the article

  • EF4 Generate Database

    - by shaneseaton
    Hi, I am trying my hardest to find the simplest way to create a basic "model first" Entity Framework example. However, I am struggling with the actual generation of the database, particularly running the SQL against the database. Tools: Visual Studio 2010, SQL Server 2008 Express. Process: create a new class project; add a new Server-Database item (mdf) named "Database1.mdf" to the project; add an empty ADO.NET Entity Model; create a simple entity (Person: Id, Name); generate the script, selecting the Database1 connection created for me by Visual Studio; right-click the script editor and select the "Execute SQL..." option; log in to SQLEXPRESS. This is where it falls over, saying it can't find a database named "Database1". The "problem" is that the SQL server has not had Database1 attached to it. I am 100% positive that Visual Studio used to attach a database to SQLExpress when it created a new database (step 2). This appears not to be the case any more (even the beta of VS2010 did it). Can someone confirm this, or tell me how to get this to happen? Is there a way that I can modify the TSQL script to use an un-attached database, i.e. a file? I know I can use SQL Management Studio or sqlcmd to attach the database, but I would ideally like to avoid those solutions, as I would like to see the cleanest method of just using Visual Studio. Ideal solutions (in order of preference): get Visual Studio to attach the newly created database, or modify the generated SQL to point to the file. Thanks in advance.

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get busy enough over the next few years to exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research - it would be highly appreciated! Cheers, imanc

    Read the article
