Search Results

Search found 14956 results on 599 pages for 'mysql dba'.


  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether they make sense, and whether I've left out anything essential. In addition to port 80, I also need ports 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh
        # Clear all existing rules.
        iptables -F
        # ACCEPT connections for the loopback interface, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT
        # ALLOW established traffic.
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
        iptables -A INPUT -f -j DROP
        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # ALLOW ssh traffic (and protect against DoS attacks).
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT
        # ALLOW http traffic (and protect against DoS attacks).
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT
        # ALLOW mysql traffic (and protect against DoS attacks).
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT
        # DROP any other traffic.
        iptables -A INPUT -j DROP
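
    A couple of hedged notes on the script above: iptables -F flushes rules but does not reset the chain policies, so if the final catch-all DROP ever fails to load, the host is left open; and since established traffic is accepted earlier, the per-port limits effectively throttle new connections only, which is usually the intent. A minimal variant that fails closed instead of open (a sketch, not a drop-in replacement):

        #!/bin/sh
        # Set default policies before flushing, so a partially loaded
        # rule set fails closed rather than open.
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT
        iptables -F
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Match NEW connections explicitly; numeric ports avoid surprises
        # if /etc/services lacks an entry.
        iptables -A INPUT -p tcp --dport 22   -m state --state NEW -m limit --limit 1/s -j ACCEPT
        iptables -A INPUT -p tcp --dport 80   -m state --state NEW -m limit --limit 5/s -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -m limit --limit 25/s -j ACCEPT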


  • [PHP, CSS, & ?] fixed width div, resizing text on the fly based on length

    - by Andrew Heath
    Let's say you've got a simple fixed-width layout that pulls a title from a MySQL database.

        CSS:
        #wrapper { width: 800px; }
        h1 { width: 100%; }

        HTML:
        <html>
        <body>
        <div id="wrapper">
        <h1> $titleString </h1>
        </div>
        </body>
        </html>

    But the catch is, the length of the title string pulled from your MySQL database varies wildly. Sometimes it might be 10 characters, sometimes 80. It's possible to establish a min and max character count. How, if at all possible, do I get the text size of my <h1>$titleString</h1> to enlarge or decrease on the fly such that the string is only ever on one line and best fit to that line length? I've seen a lot of questions about resizing the div, but in my case the div must always be 100% (800px) and I want to best-fit the title. Obviously a maximum text size would have to be set so 5-character strings don't become gargantuan. Does anyone have a suggestion? I'm only using PHP/MySQL/CSS on this page at the moment, but incorporating another language is fine if it means I can solve the problem. The only thing I can think of is a brute-force approach whereby, through trial and error, I establish acceptable character-count ranges matched with CSS em sizes, but that would be a pretty ugly implementation on the code side.
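
    One hedged server-side approach, since PHP already renders the title: estimate a font size from the character count and clamp it. The helper name and constants are assumptions to tune against the actual font:

        <?php
        // Hypothetical helper: shrink the font as the title grows, using a
        // crude average-glyph-width model (~0.55em per character).
        function titleFontSizePx($title, $boxWidthPx = 800, $maxPx = 36, $minPx = 14) {
            $len = max(1, mb_strlen($title));
            $estimate = (int) floor($boxWidthPx / ($len * 0.55));
            return max($minPx, min($maxPx, $estimate));
        }

        $size = titleFontSizePx($titleString);
        echo '<h1 style="font-size: ' . $size . 'px">' . htmlspecialchars($titleString) . '</h1>';

    A JavaScript fallback that measures the rendered width and scales until the line fits would be more exact, at the cost of a visible reflow.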


  • strtotime not working with TIME?

    - by Prashant
    My MySQL column has this datetime value, 2011-04-11 11:00:00. When I apply strtotime to it, it returns a date less than today, whereas it should be greater than today. Also, when I try strtotime(date('d/m/Y h:i A')), it returns wrong values. Is there any problem with giving a time to strtotime? Basically what I want to do is compare my MySQL column date with today's date; if it is in the future, show "Upcoming", otherwise show nothing. Please help and advise, what should I do? Edited code:

        $_startdatetime = $rs['startdatetime'];
        $_isUpcoming = false;
        if(!empty($_startdatetime)){
            $TEMP_strtime = strtotime($_startdatetime);
            $TEMP_strtime_today = strtotime(date('d/m/Y h:i A'));
            if($TEMP_strtime_today < $TEMP_strtime){
                $_isUpcoming = true;
                $_startdatetime = date('l, d F, Y h:i A', $TEMP_strtime);
            }
        }

    The value in $rs['startdatetime'] is 2011-04-11 11:00:00, and with this value I get the following output:

        $TEMP_strtime - 1302519600
        $TEMP_strtime_today - 1314908160
        $_startdatetime - 2011-04-11 11:00:00

    $_startdatetime is not formatted because the upcoming condition is false, so the MySQL value is returned as is.
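
    A hedged note on the snippet above: strtotime() follows the American convention for slash-separated dates, reading them as m/d/Y, so round-tripping "now" through date('d/m/Y h:i A') is fragile and can silently mis-parse. There is no need to build "now" as a string at all; a minimal sketch:

        <?php
        // MySQL DATETIME strings ('Y-m-d H:i:s') parse unambiguously.
        $start = strtotime($rs['startdatetime']);
        $isUpcoming = ($start !== false && $start > time()); // compare epochs directly
        if ($isUpcoming) {
            echo 'Upcoming: ' . date('l, d F, Y h:i A', $start);
        }

    Also worth noting: 1302519600 really is 2011-04-11 11:00, and 1314908160 falls in September 2011, so for that sample row the "not upcoming" result was arithmetically correct.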


  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.?

    - by Kirzilla
    Hello, let's imagine we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, and all information about the videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have the hard disk drive (text files), MySQL, and Redis. Views: top viewed; top viewed last 24 hours; top viewed last 7 days; top viewed last 30 days; top rated last 365 days. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me, because we would have to store information about the last 365 days for each video to make the calculations correct. Are there any other good methods? Perhaps we should choose other instruments for this? Thank you.
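
    Since Redis is already on the instrument list: sorted sets handle these sliding windows without keeping 365 days of per-video rows in MySQL. Keep one ZSET per hour and materialize a window as the union of the relevant hourly sets. A hedged sketch assuming the phpredis extension ($videoId comes from the request):

        <?php
        $r = new Redis();
        $r->connect('127.0.0.1', 6379);

        // On each view: bump the counter in this hour's sorted set.
        $hourKey = 'views:' . date('YmdH');
        $r->zIncrBy($hourKey, 1, $videoId);
        $r->expire($hourKey, 31 * 24 * 3600);   // retire hourly sets after ~31 days

        // Hourly cron: materialize "top viewed last 24 hours".
        $keys = array();
        for ($i = 0; $i < 24; $i++) {
            $keys[] = 'views:' . date('YmdH', time() - $i * 3600);
        }
        $r->zUnionStore('views:last24h', $keys);
        $top100 = $r->zRevRange('views:last24h', 0, 99, true); // top 100 with scores

    The same pattern with daily keys keeps the 7/30/365-day windows cheap, and the materialized results can still be flushed to MySQL for the site to read.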


  • How to document an existing small web site (web application), inside and out?

    - by Ricket
    We have a "web application" which has been developed over the past 7 months. The problem is, it was not really documented. The requirements consist of a small bulleted list from the initial meeting 7 months ago (it's more of a "goals" statement than software requirements). It has collected a number of features which stemmed from small verbal or chat discussions. The developer is leaving very soon. He wrote the entire thing himself and knows all of the quirks and underlying rules of each page, but nobody else really knows much more than the user interface side of it, which of course is the easy part, as it's made to be intuitive to the user. But if someone needs to repair or add a feature to it, the entire thing is a black box. The code has some minimal comments, and of course the good thing about web applications is that the address bar points you in the right direction towards fixing a problem or upgrading a page. But how should the developer go about documenting this web application? He is a bit lost as far as where to begin. As developers, how do you completely document your web applications for other developers, maintainers, and administrative-level users? What approach do you use, where do you start, do you have a template? An idea of magnitude: it uses PHP, MySQL, and jQuery. It has about 20-30 main (frontend) files, along with about 15 included files and a couple of folders of assets. So overall it's a pretty small application. It interfaces with 7 MySQL tables, each one well-named, so I think the database end is pretty self-explanatory. There is a config.inc.php file with definitions of consts like the MySQL user details, some from/to emails, and URLs which PHP uses to insert into emails and pages (relative and absolute paths, basically). There is some AJAX via jQuery. Please comment if there is any other information that would help you help me, and I will be glad to edit it in.


  • JDBC with JSP fails to insert

    - by StrykeR
    I am having some issues right now with JDBC in JSP. I am trying to insert a username/password etc. into my MySQL DB. I am not getting any error or exception; however, nothing is being inserted into my DB either. Below is my code; any help would be greatly appreciated.

        <%
        String uname = request.getParameter("userName");
        String pword = request.getParameter("passWord");
        String fname = request.getParameter("firstName");
        String lname = request.getParameter("lastName");
        String email = request.getParameter("emailAddress");
        %>
        <%
        try {
            String dbURL = "jdbc:mysql:localhost:3306/assi1";
            String user = "root";
            String pwd = "password";
            String driver = "com.mysql.jdbc.Driver";
            String query = "USE Users" + "INSERT INTO User (UserName, UserPass, FirstName, LastName, EmailAddress) "
                + "VALUES ('"+uname+"','"+pword+"','"+fname+"','"+lname+"','"+email+"')";
            Class.forName(driver);
            Connection conn = DriverManager.getConnection(dbURL, user, pwd);
            Statement statement = conn.createStatement();
            statement.executeUpdate(query);
            out.println("Data is successfully inserted!");
        } catch (SQLException e) {
            for (Throwable t : e)
                t.printStackTrace();
        }
        %>

    DB script here:

        CREATE DATABASE Users;
        use Users;
        CREATE TABLE User (
            UserID INT NOT NULL AUTO_INCREMENT,
            UserName VARCHAR(20),
            UserPass VARCHAR(20),
            FirstName VARCHAR(30),
            LastName VARCHAR(35),
            EmailAddress VARCHAR(50),
            PRIMARY KEY (UserID)
        );
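
    Three things are visible in the question's own code. The JDBC URL is missing its slashes (jdbc:mysql://localhost:3306/...); "USE Users" + "INSERT ..." concatenates into the single invalid statement USE UsersINSERT ..., and executeUpdate() runs only one statement anyway; and the URL names database assi1 while the script creates Users, so the table may not exist where the code is looking. The SQLException is printed to the server log only, which is why the page looks error-free. String-built SQL is also open to injection; a hedged PreparedStatement sketch (assumes the URL carries the right database):

        String dbURL = "jdbc:mysql://localhost:3306/Users";
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(dbURL, user, pwd);
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO User (UserName, UserPass, FirstName, LastName, EmailAddress) "
                 + "VALUES (?, ?, ?, ?, ?)")) {
            // Placeholders are bound, never concatenated, so quoting and
            // injection stop being the caller's problem.
            ps.setString(1, uname);
            ps.setString(2, pword);
            ps.setString(3, fname);
            ps.setString(4, lname);
            ps.setString(5, email);
            ps.executeUpdate();
        }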


  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:

        Metadata
            ResultID (PK, int, not null)
            MappedFieldname (char(50), not null)
            Fieldname (PK, char(50), not null)
            Fieldvalue (text, null)

    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%; in the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type of task and similar ad hoc DBA work?
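
    Separately from the index-design question, fragmentation that high is worth addressing directly. A hedged T-SQL sketch (the column types suggest SQL Server; names follow the schema above, adjust as needed):

        -- Measure fragmentation for the table's indexes.
        SELECT index_id, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Metadata'), NULL, NULL, 'LIMITED');

        -- Reorganize (online, lighter) for moderate fragmentation;
        -- rebuild for the 90% case, ideally in a maintenance window.
        ALTER INDEX ALL ON Metadata REORGANIZE;
        -- ALTER INDEX ALL ON Metadata REBUILD;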


  • Error using maven profiles

    - by user3127896
    I've added two profiles to my application, and this is how it looks:

        <profiles>
            <profile>
                <id>dev</id>
                <properties>
                    <db.username>root</db.username>
                    <db.password>root</db.password>
                    <db.connectionURL>localhost:3306</db.connectionURL>
                    <db.driverClass>com.mysql.jdbc.Driver</db.driverClass>
                </properties>
            </profile>
            <profile>
                <id>prod</id>
                <properties>
                    <db.username>prodroot</db.username>
                    <db.password>prodpass</db.password>
                    <db.connectionURL>localhost:3306</db.connectionURL>
                    <db.driverClass>com.mysql.jdbc.Driver</db.driverClass>
                </properties>
            </profile>
        </profiles>

    In my jdbc.properties file I changed the values like this:

        jdbc.driverClassName=${db.driverClass}
        jdbc.dialect=org.hibernate.dialect.MySQLDialect
        jdbc.databaseurl=jdbc:mysql://${db.connectionURL}/dbname
        jdbc.username=${db.username}
        jdbc.password=${db.password}

    And here's the bean from spring-container.xml:

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
              destroy-method="close"
              p:driverClassName="${jdbc.driverClassName}"
              p:url="${jdbc.databaseurl}"
              p:username="${jdbc.username}"
              p:password="${jdbc.password}" />

    When I try to deploy my application I get the following error:

        SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener
        org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'dataSource' defined in ServletContext resource [/WEB-INF/spring-servlet.xml]: Could not resolve placeholder 'db.driverClass'

    Any ideas what I'm doing wrong? Thanks in advance!
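
    The error is the giveaway: Spring received the literal text ${db.driverClass}, which means Maven never filtered jdbc.properties, so the profile properties were never substituted. A hedged sketch of the missing piece, assuming jdbc.properties lives under src/main/resources:

        <build>
            <resources>
                <resource>
                    <directory>src/main/resources</directory>
                    <!-- substitute ${db.*} placeholders at build time -->
                    <filtering>true</filtering>
                </resource>
            </resources>
        </build>

    Then build with a profile active, e.g. mvn clean package -Pdev, and check that the jdbc.properties inside the produced WAR contains real values rather than placeholders.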


  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clause respectively:

        /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or at a time when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do, and the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load-balance skew is 9 to 1: one PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario, the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
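
    Pulling the pieces of the post together, a consolidated sketch of the load pattern using the running example's DOP of 8 (table and names are the article's own placeholders):

        -- Target table carries the NOLOGGING and PARALLEL attributes.
        CREATE TABLE my_oracle_table ( ... ) NOLOGGING PARALLEL;

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- append + NOLOGGING avoids REDO overhead; pq_distribute unifies
        -- the INSERT and SELECT operators for a more efficient data flow.
        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;
        COMMIT;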


  • Bugzilla with SSL

    - by Antitribu
    I have an install of Bugzilla I'm trying to implement SSL on. When I go into the parameters screen and edit SSLBASE, adding the full URL https://foo.com/bugzilla/, the editparams.cgi page times out on loading, and I have the following error in the Apache log:

        [Tue Mar 30 19:29:39 2010] [error] [client xxx.xx.xx.xx] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed, referer: http://foo.com/bugzilla/editparams.cgi

    On install I also received this error:

        WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file.

    How can I force this to work? Editing other parameters (e.g. urlbase) works fine. The SSL site is set up, and direct requests to it (e.g. https://foo.com/bugzilla) work correctly. Any ideas? Thanks.
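
    The installer warning spells out its own fix; a sketch of the MySQL configuration change (the file location varies by distribution, often /etc/my.cnf):

        [mysqld]
        # anything >= 3276750 bytes satisfies the Bugzilla check
        max_allowed_packet = 4M

    Restart MySQL afterwards. Whether that also cures the editparams.cgi timeout is less certain, since the Apache error points at request handling, but it removes one known variable before digging further.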


  • How much effort is SQL Server 2008 Administration?

    - by Adrian Grigore
    Hi, I am looking for a suitable hosting environment for an ASP.NET MVC application. One of the options I have is renting a Hyper-V server and installing my license of SQL Server 2008 on it. I'm a bit wary of shared hosting, since the one I have tried so far did not seem to have very consistent performance. One potential problem is that I do not know much about SQL Server administration, so I am not sure if this is a good option. I've been running a failover cluster of two Linux dedicated servers for over 5 years now, and MySQL never gave me any trouble. But that was Linux, and it might be different with a Windows system. Is running a halfway-efficient MS SQL Server 2008 difficult? Does it require any in-depth administration knowledge? Or perhaps recurring administration effort (such as keeping the server up to date with the latest patches)? Or is it rather an "install and forget" experience similar to MySQL?


  • How do you fix the Performance Dashboard datetime overflow error

    - by Mike L
    I'm a programmer/DBA by accident, and we're running SQL Server 2005 with Performance Dashboard for basic monitoring. The server has been up for a few weeks, and now we can't drill into certain reports. Is there any way to reset these reports without a complete reboot? Edit: I bet the error message would help. I get this when I drill into the CPU graph:

        Error: Difference of two datetime columns caused overflow at runtime.


  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is just running a couple of sites with very low traffic. One of the sites is still in development, going live soon. However, this site is very, very slow... When browsing through its pages, I can see that the CPU goes from 30% to 100% for httpd (see the top output below). I have tuned httpd, MySQL, Apache Solr, and Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit, as I have a bunch of messages in the httpd log and have been chasing dead ends for some time... any help is greatly appreciated.

        Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB
        Linux 2.6.27 64-bit, CentOS 5.5
        Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17
        Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8
        Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr

    top:

        17595 apache  20  0 1825m 507m  10m R 100.4  3.2   0:17.50 httpd
        17596 apache  20  0 1565m 247m 9936 R  83.1  1.5   0:10.86 httpd
        17598 apache  20  0 1430m 110m 6472 S  54.5  0.7   0:08.66 httpd
        17599 apache  20  0 1438m 124m  12m S  37.2  0.8   0:11.20 httpd
        16197 mysql   20  0 13.0g 2.0g 5440 S   9.6 12.6 297:12.79 mysqld
        17617 root    20  0 12748 1172  812 R   0.7  0.0   0:00.88 top
         8169 tomcat  20  0 4613m 268m 6056 S   0.3  1.7   6:40.56 java

    httpd error_log:

        [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem)
        [info] mod_fcgid: Process manager 17593 started
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse
        [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*)
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse
        [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [notice] child pid 22782 exit signal Segmentation fault (11)
        [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed
        [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com
        [info] APR LDAP: Built with OpenLDAP LDAP SDK
        [info] LDAP: SSL support available
        [info] Init: Seeding PRNG with 256 bytes of entropy
        [info] Init: Generating temporary RSA private keys (512/1024 bits)
        [info] Init: Generating temporary DH parameters (512/1024 bits)
        [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory
        [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory()
        [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes
        [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow
        [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F
        [debug] ssl_scache_shmcb.c(623): division_offset = 96
        [debug] ssl_scache_shmcb.c(625): division_size = 15997
        [debug] ssl_scache_shmcb.c(627): queue_size = 2136
        [debug] ssl_scache_shmcb.c(629): index_num = 133
        [debug] ssl_scache_shmcb.c(631): index_offset = 8
        [debug] ssl_scache_shmcb.c(633): index_size = 16
        [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8
        [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853
        [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()


  • configuring two network interfaces in ubuntu 10.04.1

    - by Bill Smith
    I have two NICs configured on a VM; each is tied to a specific network, one a DMZ, the other an internal network. I want MySQL to listen on the internal network only, and Apache on the DMZ listening for HTTP and HTTPS. But as soon as I add the second interface, I run into trouble: I can hit HTTP on either interface, but cannot hit 3306 on the internal network for MySQL. Here's the config; could someone sanity-check this please?

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 10.153.24.230
            netmask 255.255.255.240
            network 10.153.24.224
            broadcast 10.153.24.239
            dns-nameservers 8.8.8.8

        auto eth1
        iface eth1 inet static
            address 10.153.24.195
            netmask 255.255.255.224
            gateway 10.153.24.193
            broadcast 10.153.23.223
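
    Note that which address MySQL listens on is decided by MySQL itself, not by the interface definitions. A hedged my.cnf sketch, assuming eth0 (10.153.24.230) is the internal network:

        [mysqld]
        bind-address = 10.153.24.230

    After restarting MySQL, netstat -ltnp | grep 3306 should show it bound to that address; if it shows 127.0.0.1, the old bind-address was the culprit, and if it already shows the right address, look at firewall rules next.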


  • Server Security

    - by mahatmanich
    I want to run my own root server (directly accessible from the web without a hardware firewall) with Debian Lenny, Apache 2, PHP 5, MySQL, the Postfix MTA, SFTP (based on SSH), and maybe a DNS server. What measures/software would you recommend, and why, to lock this server down and minimize the attack vector? Web applications aside... This is what I have so far: iptables (for general packet filtering); fail2ban (brute-force attack defense); ssh (change the default port, disable root access); modsecurity, which is really clumsy and a pain (any alternative here?); sudo (why should I use it? what is the advantage over normal user handling?); thinking about GreenSQL for MySQL (www.greensql.net); is Tripwire worth looking at? Snort? What am I missing? What is hot and what is not? Best practices? I like "KISS": keep it simple, secure. Thanks in advance...


  • Evaluate a Munin graph defined in munin.conf

    - by Ztyx
    Hi, I have defined an additional graph in Munin (in munin.conf) that calculates the total size of my MySQL database. The index and data sizes are extracted from an external plugin. The definition looks like this:

        [...]
        [Database;my.host.com]
            address my.host.com
            use_node_name yes
            dbsize.update no
            dbsize.graph_args --base 1024 -l 0
            dbsize.graph_title Total database size
            dbsize.graph_vlabel bytes
            dbsize.graph_category mysql
            dbsize.graph_info The total database size.
            dbsize.graph_order the_sum
            dbsize.the_sum.sum \
                my.host.com:mysql_size.index \
                my.host.com:mysql_size.datas
            dbsize.the_sum.label data+index
            dbsize.the_sum.type GAUGE
            dbsize.the_sum.min 0
        [...]

    Now, is it possible to extract the current value of this graph? Running

        # munin-run dbsize
        # munin-run my.host.com:dbsize

    does not seem to work.
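
    One hedged avenue: munin-run only executes plugins on a node, while a graph defined purely in munin.conf is computed by munin-update on the master and lives only as an RRD file there. The path and file-name pattern below are assumptions based on Munin's usual layout (group directory, then host-plugin-field-type):

        rrdtool fetch /var/lib/munin/Database/my.host.com-dbsize-the_sum-g.rrd AVERAGE -s -600

    The last non-NaN row is the current value; an ls of /var/lib/munin/Database/ will confirm the actual file name.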


  • Dedicated server configuration - some tips -Xenserver/Debian

    - by Sanjay S
    I am migrating from VPS-based hosting to dedicated hosting (8GB RAM / 1TB HD). I need to run multiple Drupal and Ruby based applications; what would be the recommended configuration? I was thinking of two options: 1) install multiple Debian OSes on top of Xen (like a VPS), each with maybe 2GB of memory, and run Drupal, Ruby, and MySQL on separate partitions; or 2) install one instance of Debian, and install Drupal (Apache, PHP), Ruby (lighttpd, ruby), and MySQL all in the same partition. I was a little worried that option 2 could lead to some performance issues later.


  • yum install php-tidy - no more mirrors

    - by Lylo
    Hi, I'm trying to get the tidy extension installed on CentOS running PHP 5.3. Thanks.

        Downloading Packages:
        http://repo.webtatic.com/yum/centos/5/x86_64/php-tidy-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-pdo-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-mysql-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-gd-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-xml-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-common-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-devel-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-cli-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.

        Error Downloading Packages:
        php-tidy-5.3.5-1.w5.x86_64: failure: php-tidy-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-cli-5.3.5-1.w5.x86_64: failure: php-cli-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-5.3.5-1.w5.x86_64: failure: php-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-pdo-5.3.5-1.w5.x86_64: failure: php-pdo-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-devel-5.3.5-1.w5.x86_64: failure: php-devel-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-mysql-5.3.5-1.w5.x86_64: failure: php-mysql-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-common-5.3.5-1.w5.x86_64: failure: php-common-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-xml-5.3.5-1.w5.x86_64: failure: php-xml-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-gd-5.3.5-1.w5.x86_64: failure: php-gd-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
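
    When every package in a repository 404s like this, the usual first suspect is stale cached metadata pointing at RPM versions the mirror has since replaced. A hedged first step:

        yum clean all
        yum install php-tidy

    If the URLs still 404 after the metadata refresh, the webtatic repo itself has likely retired those package versions, and the repo definition needs updating.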


  • Implementing a Linux-HA based clustering setup on Windows

    - by Alex
    I have a (tried and tested) setup involving: 2x load-balancing nodes on a floating IP via Heartbeat, load balancing 2 Tomcat servers; 2x Tomcat servers; 2x Galera Cluster MySQL servers synchronously replicating (+1 arbitrator node). All are evenly spread across 2 physical nodes. Now, I have to somehow get the same functionality on Windows Server (2008, I think) nodes... running under Xen virtualization. There is no possibility of using Linux for any of the nodes. I count two main problems: no Linux-HA heartbeat daemon for the load balancing, and no Galera synchronous replication for MySQL. I freely admit to having nearly no Windows knowledge when it comes to clustering. Is there a way to closely mimic the setup I have described, or is it a total write-off?


  • Postfix/Procmail mailing list software

    - by Jason Antman
    I'm looking for suggestions on mailing list software to use on an existing server running Postfix/Procmail. Something relatively simple. Requirements: 1 list, < 50 subscribers; list members dumped into a file by a script (pulled from LDAP or MySQL on another box); handles MIME, images, etc.; moderation features; no subscription/unsubscription, it just goes by the file or database. Mailman is far too heavyweight, and doesn't seem to play (easily) with Postfix/Procmail. I'm currently using a PHP script that just receives mail as a user, reads a list of members from a serialized array (a file dumped on the box via cron from the machine with the MySQL database containing members) and re-mails the message to everyone. Unfortunately, we now need moderation capabilities, and I don't quite feel like adding that to the PHP script if there's already something out there that does it. Thanks for any tips. -Jason


  • Configuring jdbc-pool (tomcat 7)

    - by john
    I'm having some problems with Tomcat 7 when configuring jdbc-pool. I've tried to follow this example: http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency. So I have:

        conf/server.xml:

        <GlobalNamingResources>
            <Resource type="javax.sql.DataSource"
                      name="jdbc/DB"
                      factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/mydb"
                      username="user"
                      password="password" />
        </GlobalNamingResources>

        conf/context.xml:

        <Context>
            <ResourceLink type="javax.sql.DataSource" name="jdbc/LocalDB" global="jdbc/DB" />
        <Context>

    And when I try to do this:

        Context initContext = new InitialContext();
        Context envContext = (Context)initContext.lookup("java:/comp/env");
        DataSource datasource = (DataSource)envContext.lookup("jdbc/LocalDB");
        Connection con = datasource.getConnection();

    I keep getting this error:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:803)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:159)

    Please help, thanks.
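
    Two things stand out in the snippets as quoted. First, context.xml ends with <Context> instead of </Context>, so Tomcat may never parse the ResourceLink, which would explain "Name jdbc is not bound in this Context". Second, the JNDI prefix is conventionally written java:comp/env, without a slash after the colon. A hedged corrected sketch:

        <!-- conf/context.xml, with the closing tag fixed -->
        <Context>
            <ResourceLink type="javax.sql.DataSource" name="jdbc/LocalDB" global="jdbc/DB" />
        </Context>

        // Lookup sketch with the standard prefix.
        Context initContext = new InitialContext();
        Context envContext = (Context) initContext.lookup("java:comp/env");
        DataSource datasource = (DataSource) envContext.lookup("jdbc/LocalDB");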


  • Mac Websharing Off but Process Running

    - by benedict_w
    I have my Mac set up to use PHP/MySQL/Apache via MacPorts. Recently it has gone a bit pear-shaped: it seems that the stock Mac versions of Apache and MySQL are running, blocking the MacPorts services:

        (48)Address already in use: make_sock: could not bind to address [::]:80
        (48)Address already in use: make_sock: could not bind to address [::]:443

    Web Sharing in System Preferences is off; how can I properly disable it? I tried turning it on and off again in System Preferences, but it would not change from off to on. Also, if I kill the process, it starts running again.
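
    If killing the stock httpd just respawns it, it is almost certainly under launchd control. A hedged sketch (the plist path is the usual one for this era of OS X; confirm it exists first):

        sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
        # then find whatever still holds the MySQL port:
        sudo launchctl list | grep -i mysql
        sudo lsof -i :3306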


  • ubuntu mail server settings and /etc/hosts file

    - by mbrc
    This is my /etc/hosts file:

        127.0.0.1       localhost.localdomain localhost
        127.0.1.1       ubuntu-server.xx.com ubuntu-server
        193.77.xx.xx    mail.xx.com mail

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Is this a correct configuration for my mail server? I am behind a router, so I don't know if it is OK to use my public IP for mail.xx.com and 127.0.0.1 for localhost. The problem is that I can receive mail, but when I send it I get:

        Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: SASL authentication failure: Password verification failed
        Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL PLAIN authentication failed: authentication failure
        Oct 17 21:29:34 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL LOGIN authentication failed: authentication failure

    EDIT: Maybe the problem is a port. I forward these ports:

        POP3 - port 110
        IMAP - port 143
        SMTP - port 25
        HTTP - port 80
        Secure SMTP (SSMTP) - port 465
        Secure IMAP (IMAP4-SSL) - port 585
        StartTLS - port 587
        IMAP4 over SSL (IMAPS) - port 993
        Secure POP3 (SSL-POP) - port 995

    postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        content_filter = amavis:[127.0.0.1]:10024
        delay_warning_time = 4h
        disable_vrfy_command = yes
        inet_interfaces = all
        inet_protocols = all
        mailbox_size_limit = 0
        maximal_backoff_time = 8000s
        maximal_queue_lifetime = 7d
        message_size_limit = 0
        minimal_backoff_time = 1000s
        mydestination =
        myhostname = mail.xx.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mynetworks_style = host
        myorigin = /etc/mailname
        readme_directory = no
        receive_override_options = no_address_mappings
        recipient_delimiter = +
        relayhost =
        smtp_helo_timeout = 60s
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
        smtpd_data_restrictions = reject_unauth_pipelining
        smtpd_delay_reject = yes
        smtpd_hard_error_limit = 12
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_limit = 16
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
        smtpd_soft_error_limit = 3
        smtpd_tls_cert_file = /etc/ssl/private/mail.xx.com.crt
        smtpd_tls_key_file = /etc/ssl/private/mail.xx.com.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        unknown_local_recipient_reject_code = 450
        virtual_alias_maps = mysql:/etc/postfix/maps/alias.cf
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /var/spool/mail/virtual
        virtual_mailbox_domains = mysql:/etc/postfix/maps/domain.cf
        virtual_mailbox_limit = 0
        virtual_mailbox_maps = mysql:/etc/postfix/maps/user.cf
        virtual_uid_maps = static:5000

    saslfinger -c:

        saslfinger - postfix Cyrus sasl configuration
        version: 1.0.4
        mode: client-side SMTP AUTH

        -- basics --
        Postfix: 2.9.3
        System: Ubuntu 12.04.1 LTS

        -- smtp is linked to --
        libsasl2.so.2 => /usr/lib/i386-linux-gnu/libsasl2.so.2 (0x00d3a000)

        -- active SMTP AUTH and TLS parameters for smtp --
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes

        -- listing of /usr/lib/sasl2 --
        total 28
        drwxr-xr-x  2 root  root   4096 okt 14 15:18 .
        drwxr-xr-x 72 root  root  12288 okt 14 15:03 ..
        -rw-r--r--  1 root  root      1 maj  4 06:17 berkeley_db.txt
        -rw-r-----  1 root  root    701 okt 14 15:18 saslpasswd.conf
        -rw-r-----  1 smmta smmsp   885 okt 14 15:18 Sendmail.conf

        -- listing of /etc/postfix/sasl --
        total 12
        drwxr-xr-x 2 root root 4096 okt 11 18:55 .
        drwxr-xr-x 4 root root 4096 okt 12 06:59 ..
        -rwx------ 1 root root  241 okt 11 18:55 smtpd.conf

        Cannot find the smtp_sasl_password_maps parameter in main.cf.
        Client-side SMTP AUTH cannot work without this parameter!


  • Unable to setup ssh tunnel on mac

    - by prashant
    On my office Windows XP laptop, I use a program called Bitvise Tunnelier to establish an SSH tunnel to an in-house MySQL database. In the Tunnelier program I also need to provide the address of the corporate HTTP proxy server in order to establish the tunnel. On my personal Mac laptop, I use the Cisco AnyConnect client to establish a VPN connection to my corporate network, but I'm unable to establish an SSH tunnel to the MySQL database using ssh. How do I specify the proxy server address in the ssh command? As additional info: when I'm using the office laptop (whether at home or in the office), I can successfully ping the server address specified in the Tunnelier program, but I cannot ping the same server from my Mac (even after connecting via VPN). So basically I'm unable to understand what's going on and what steps I can take to debug this problem.
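
    To answer the literal question: OpenSSH can tunnel through an HTTP proxy with ProxyCommand. A hedged sketch using the BSD nc that ships with OS X (host names and the proxy address are placeholders):

        # ~/.ssh/config
        Host dbtunnel
            HostName ssh-gateway.example.com
            ProxyCommand nc -X connect -x proxy.example.com:8080 %h %p
            LocalForward 3306 mysql-host.example.com:3306

        # then: ssh -N dbtunnel, and point the MySQL client at 127.0.0.1:3306

    That said, if the Cisco VPN is up, traffic should normally route without the proxy, so the failed ping suggests a routing or split-tunnel difference between the two machines rather than a missing proxy setting.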


  • Generating SSL certificates

    - by user73483
    Hi, I was wondering if anyone has any idea how to generate a CA-signed cert and key using OpenSSL? I have found this page (http://dev.mysql.com/doc/refman/5.1/en/secure-create-certs.html) for generating the client and server certs for MySQL server, but the example is a self-signed certificate. I use the following commands for running the server and client using openssl and the generated certs and keys:

        openssl s_server -accept 6502 -cert server-cert.pem -key server-key.pem -CAfile ca-cert.pem -www
        openssl s_client -connect 192.168.1.92:6502 -cert client-cert.pem -key client-key.pem -CAfile ca-cert.pem

    The error output I get is "Verify return code: 18 (self signed certificate)". Paul
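
    A minimal sketch of a private CA plus a CA-signed server certificate (file names arbitrary); the "Verify return code: 18" goes away once the presented certificate is signed by a separate CA rather than by itself:

        # 1. CA key and self-signed CA certificate
        openssl genrsa -out ca-key.pem 2048
        openssl req -new -x509 -days 3650 -key ca-key.pem -out ca-cert.pem

        # 2. Server key and certificate signing request
        openssl genrsa -out server-key.pem 2048
        openssl req -new -key server-key.pem -out server-req.pem

        # 3. Sign the request with the CA (this is what makes it not self-signed)
        openssl x509 -req -in server-req.pem -days 365 \
            -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

    One MySQL-specific gotcha: give the CA a Common Name different from the server and client certificates, or verification can still fail.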

