Search Results

Search found 8589 results on 344 pages for 'pre production'.

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I had worked at companies that had a collection of different environments for different purposes. We always had more or less our desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed on a desktop environment against production data sources or developed directly on the production server depending on the platform. I wasn't fazed because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy and pretty soon, most of the apps could be run in either a desktop, test or production environment. Not too long after that staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigilantly. However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function what I consider "normally" only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time and interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply as it always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience to get behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • XSIGO Product Training Now Available

    - by Cinzia Mascanzoni
    Xsigo Sales and Pre-Sales training is available via iLearning for partners registered in OPN Server and Storage Systems Knowledge Zones. The recommended online training sessions provide sales training solutions that equip partners with the product knowledge, market knowledge and selling strategies to help achieve their revenue targets. Partners are invited to learn more here: Sales Training: Product Essentials For Sales - Oracle Virtual Networking Pre-Sales Training and Product Essentials For Sales Consultants - Oracle Virtual Networking.

    Read the article

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200.  On April 1st we increase the price.  On July 1st we increase it again.  We have regular price bumps all the way through to the Summit.  You can save yourself $200 if you register by Thursday. In two years of marketing for PASS and a year of finance I’ve learned a fair bit about our pricing, why we do this and how you react to it.  Let me help you save some money! Price bumps drive registrations.  We see big spikes in the two weeks prior to a price increase.  Having a deadline with a cost attached is a great motivator to get people to take action. Registering early helps you and it helps PASS.  You get the exact same Summit at a cheaper rate.  PASS gets smoother cash flow and a better idea of how many people to expect.  We also get people that are already registered that will tell their friends about the conference. This tiered pricing lets us serve those that are very price conscious.  They can register early and take advantage of these discounts.  I know there are people that pay for this conference out of their own pockets.  This is a great way for those people to reduce the cost of the conference.  (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.) We also get big price bumps after we announce the program and the pre-conference sessions.  If you wrote down the 50 or so best known speakers in the SQL Server community I’m guessing we’ll have nearly all of them at the conference.  We did last year.  I expect we will this year too.  We’re going to have good sessions.  Why wait?  Register today. If you want to attend a pre-conference session you can always add it to your registration later.  Pre-con prices don’t change.  It’s very easy to update your registration and add a pre-conference session later. I want as many people as possible to attend the Summit.  It’s been a great experience for me and I hope it will be for you.  And if you are going to go, do yourself a favor and save some money.  Register today!

    Read the article

  • Taking a Chomp out of a (Social Network) Product Hype

    - by kellsey.ruppel
    Andrew Kershaw, Senior Director Oracle Social Network Product Development, speaks about Oracle Social Network One of our competitors is being very aggressive with its own developed Social Network add-on, but there should be no doubt in the minds that the Oracle social capabilities available with Fusion CRM stack up well against it. Within the Oracle Cloud, we have announced a product called Oracle Social Network. That technology is pre-integrated into Fusion Applications, enabling your customer to build a collaborative and social enterprise (without all the noise!). Oracle Social Network is designed together with our Fusion Applications. It is very conveniently pre-integrated with CRM, HCM, Financials, Projects, Supply Chain, and the Fusion family. But what's even better is that the individual teams can take a considered approach to what they are trying to achieve within the collaboration process and the outcome they are trying to enable. Then they can utilize the network and collaboration tools to support that result. And there's more! The Fusion teams can design social interactions that bridge across and outside their individual product lines because we have more than just a product line and they know they have the social network to connect them. I know we have a superior product, but it is our ability to understand and execute across the enterprise that will enable us to deliver a much more robust and capable platform in the short term than our competitor can. We have built a product specifically designed for enterprise social collaboration which is not the same for the competition. We have delivered a much more effective solution - one in which individuals can easily collaborate to get results, while being confident that they know who has access to their information. Our platform has been pre-built to cross the company boundaries and enable our customers to collaborate, not just with their customers, but with their partners and suppliers as well. So Fusion addresses the combination of the enterprise application suite with enterprise collaboration and social networking. Oracle Social Network already has a feature function advantage over our competitor's tool providing a real added value to the employees. Plus Oracle has the ability to execute in a broad enterprise and cross-enterprise way that our competitors cannot. We have the power of a tool that provides the core social fabric across all of the applications, as well as supporting enterprise collaboration. That allows us to provide intelligent business insight, connections, and recommendations that our competitor simply can't. From our competitors, customers get integration for Sales; they get integration for Service, but then they have to integrate every other enterprise asset that they have by themselves. With Oracle, we are doing the integration. Fusion Applications will be pre-integrated, and over time, all of the applications in the business suite, including our Applications Unlimited and specialist industry applications, will connect to the Oracle Social Network. I'm confident these capabilities make Oracle Social Network the only collaboration platform on which to deliver the social enterprise.

    Read the article

  • Create Sound and Video Presentation For Your Website

    The concept of web video production offers the business community an exciting opportunity to expand their reach. The technological advances in the speeds of internet transmission now make it possible to take the concept of marketing to another level. The availability of high quality video provides an efficient tool for business to reach an expanding customer base. This article will briefly discuss web video production.

    Read the article

  • Second preview of the Internet Explorer 9 engine: HTML5 seems to have become a priority for Microsoft

    Update of 06/05/10: second preview of the Internet Explorer 9 engine, but still no interface; HTML5 seems to have become a priority for Microsoft. The second pre-release of Internet Explorer 9 is available, or more precisely the pre-release of its engine. The development team does not want to put the cart before the horse, and so continues to offer the browser's "core" for testing before revealing its appearance. According to the project lead, this engine handles JavaScript 20% faster than the previous version (and 36% faster than that of Firefo...

    Read the article

  • We’re looking for SQL People

    - by simonsabin
    We are growing our data team at Wonga. If you are working in the SQL Server space and would like to join one of the fastest growing tech companies in Europe, then please get in touch ( http://sqlblogcasts.com/blogs/simons/contact.aspx ). We have positions for production DBAs, data QA analysts and SQL generalists (with a BI tendency). We also have generalist production support roles. Wonga is currently 3rd in the Times Tech Track 100, having been 1st last year. Being in the top 3 for two years...(read more)

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some DBA regular meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it have the same level of work that they output as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it and those places that do the testing and best practices really do save stress, time and trouble from that effort. We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing, you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • How to fix the "Unable to calculate upgrade" issue when upgrading from 12.04 to 12.10?

    - by Vagrant232
    I've been trying to upgrade to 12.10 ever since it was released today but I keep meeting this error: "An unresolvable problem occurred while calculating the upgrade: E:Unable to correct problems, you have held broken packages. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu" I've tried updating all the currently installed software, removing all the extra PPAs, downgrading the files installed from xorg edgers' ppa but I haven't been able to solve the problem.
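    For what it's worth, a rough cleanup sequence before retrying the upgrade might look like the following; the exact PPA name is an assumption based on the xorg-edgers packages mentioned above:

      sudo dpkg --get-selections | grep hold        # list any packages pinned on "hold"
      sudo apt-get install ppa-purge
      sudo ppa-purge ppa:xorg-edgers/ppa            # revert PPA packages to the stock Ubuntu versions
      sudo apt-get update && sudo apt-get -f install
      sudo apt-get dist-upgrade                     # clear leftover breakage first
      sudo do-release-upgrade                       # then retry the 12.04 -> 12.10 upgrade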

    Read the article

  • Load Test Manifesto

    - by jchang
    Load testing used to be a standard part of the software development, but not anymore. Now people express a preference for assessing performance on the production system. There is a lack of confidence that a load test reflects what will actually happen in production. In essence, it has become accepted that the value of load testing is not worth the cost and time, and perhaps whether there is any value at all. The main problem is the load test plan criteria – excessive focus on perceived importance...(read more)

    Read the article

  • Problems with ipsec betwen Cisco ASA 5505 and Juniper ssg5

    - by Oskar Kjellin
    I am trying to set up an ipsec tunnel between our ASA 5505 and a Juniper ssg5. The tunnel is up and running, but I cannot get any data through it. The local network I am on is 172.16.1.0 and the remote is 192.168.70.0. But I cannot ping anything on their network. I receive a "Phase 2 OK" when I set up the ipsec. I think this is the part of the config that is applicable. It seems like the data is not routed through the tunnel, but I am not sure... object network our-network subnet 172.16.1.0 255.255.255.0 object network their-network subnet 192.168.70.0 255.255.255.0 access-list outside_cryptomap extended permit ip object our-network object their-network crypto ipsec ikev1 transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto map outside_map 1 match address outside_cryptomap crypto map outside_map 1 set pfs crypto map outside_map 1 set peer THEIR_IP crypto map outside_map 1 set ikev1 phase1-mode aggressive crypto map outside_map 1 set ikev1 transform-set ESP-3DES-MD5 crypto map outside_map 1 set ikev2 pre-shared-key ***** crypto map outside_map 1 set reverse-route crypto map outside_map interface outside webvpn group-policy GroupPolicy_THEIR_IP internal group-policy GroupPolicy_THEIR_IP attributes vpn-filter value outside_cryptomap ipv6-vpn-filter none vpn-tunnel-protocol ikev1 tunnel-group THEIR_IP type ipsec-l2l tunnel-group THEIR_IP general-attributes default-group-policy GroupPolicy_THEIR_IP tunnel-group THEIR_IP ipsec-attributes ikev1 pre-shared-key ***** ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key *****

    Read the article

  • sudo or acl or setuid/setgid ?

    - by Xavier Maillard
    Hi, for a reason I do not really understand, everyone wants sudo for all and everything. At work we even have as many entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo defeats its purpose here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting up. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way to install privileges, but members of the %production group must be able to consult configuration files or even log files. There is still the solution of adding them to the right groups (mysql...) and setting the right permissions. But I do not want to usermod all users, and I do not want to modify standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look like this: setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql Do you think this is good practice or should I definitely usermod -G mysql and play with the standard permissions system? Thank you
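    For reference, a minimal sketch of the ACL route for the mysql example, assuming the group is literally named "production" and the filesystem supports ACLs (paths as in the question):

      sudo setfacl -R -m g:production:rX /var/log/mysql /etc/mysql      # grant read now (X = traverse directories only)
      sudo setfacl -R -d -m g:production:rX /var/log/mysql /etc/mysql   # default ACL so future files inherit the grant
      getfacl /var/log/mysql                                            # verify what was applied

    The default (-d) entries are what keep newly rotated log files readable without re-running usermod or touching the packaged permissions.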

    Read the article

  • cloning a kvm guest os to a vmdk file

    - by Bond
    I have a production environment with 4 guest OSes running on an Ubuntu server that uses KVM. These guests are in an LVM-based setup. I also want these virtual machines in vmdk format, so that people can do experiments with them in a VMware (or possibly Xen) environment that is separate from the KVM server. I would not have any control over that other environment, so I want to give people vmdk images of these virtual machines. The production virtual machines will keep running on the KVM server, but the VMs used for experiments would be vmdk (vmdk is a constraint). Here is the output of lvscan: ACTIVE '/dev/abcd/lvm1' [100.00 GiB] inherit ACTIVE '/dev/abcd/lvm2' [150.00 GiB] inherit ACTIVE '/dev/abcd/lvm3' [50.00 GiB] inherit ACTIVE '/dev/abcd/lvm4' [100.00 GiB] inherit I was reading the qemu-img man page, and my understanding is that I first need to create a qcow image file, populate it, and then convert that to a vmdk file. Is that understanding correct? Now suppose /dev/abcd/lvm4 is the virtual machine I start this experiment with. I can shut down the production VMs for some time to do this. Is the following the correct way to go on server 1 (where KVM is running): qemu-img convert -c -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.img or will it affect lvm4 on KVM server 1? I do not want the VM running on the original server to lose any of its content, but I also want a vmdk file for each of the guest OSes on KVM. Before I proceed with any of this on the production machine I just want to make sure that I am doing the correct thing, so I am asking here.
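    For what it's worth, qemu-img can write VMDK directly from the raw logical volume and only reads the source, so no intermediate qcow step is needed. A rough sketch (snapshot size and names are placeholders; shut the guest down first, or accept a crash-consistent image if it stays running):

      sudo lvcreate -s -L 10G -n lvm4_snap /dev/abcd/lvm4                  # point-in-time copy of the source LV
      sudo qemu-img convert -f raw -O vmdk /dev/abcd/lvm4_snap /backup/lvm4.vmdk
      sudo qemu-img info /backup/lvm4.vmdk                                 # sanity-check the result
      sudo lvremove -f /dev/abcd/lvm4_snap                                 # drop the snapshot when done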

    Read the article

  • Enabling CURL on Ubuntu 11.10

    - by Afsheen Khosravian
    I have installed curl: sudo apt-get install curl libcurl3 libcurl3-dev php5-curl and I have updated my php.ini file to include(I also tried .so): extension=php_curl.dll To test if curl is working I created a file called testCurl.php which contains the following: <?php echo ‘<pre>’; var_dump(curl_version()); echo ‘</pre>’; ?> When I navigate to localhost/testCurl.php I get an error: HTTP Error 500 Heres a snippet from the error log: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/php_curl.dll' - /usr/lib/php5/20090626+lfs/php_curl.dll: cannot op$ PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/sqlite.so' - /usr/lib/php5/20090626+lfs/sqlite.so: cannot open sha$ [Sun Dec 25 12:10:17 2011] [notice] Apache/2.2.20 (Ubuntu) PHP/5.3.6-13ubuntu3.3 with Suhosin-Patch configured -- resuming normal operations [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/ [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/ [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/ [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/` Can anyone help me to get curl working? The problem was with the original test code. I used a new test file containing this and curl is now working: <?php ## Test if cURL is working ## ## SCRIPT BY WWW.WEBUNE.COM (please do not remove)## echo '<pre>'; var_dump(curl_version()); echo '</pre>'; ?>
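    For reference, on Ubuntu the php5-curl package normally registers itself through /etc/php5/conf.d/curl.ini (extension=curl.so), so the Windows-style dll line added to php.ini is what triggers the startup warning. A possible cleanup, assuming the stock apache2/php5 packages:

      sudo sed -i '/extension=php_curl.dll/d' /etc/php5/apache2/php.ini   # drop the bogus dll line
      cat /etc/php5/conf.d/curl.ini                                       # should already contain: extension=curl.so
      sudo service apache2 restart
      php -m | grep -i curl                                               # confirm the module loads (needs php5-cli)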

    Read the article

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same scheme. The databases are in production. The motivation: we want to present information in these databases in a web interface, clearly showing which database the row originated from. We want to be able to get this data from one single source (for different reasons, one of them is pagination which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, storing it at a central location and clearly marking the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone. If there is a solution to this that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
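    If schema changes in production are off the table, one low-tech sketch of the pull-and-tag approach looks like this; hostnames, credentials, databases and the orders table are placeholders, and the central server must allow LOAD DATA LOCAL INFILE:

      #!/bin/bash
      # Pull rows from each production MySQL instance, tag each row with its
      # origin, and bulk-load them into one central reporting table.
      CENTRAL="mysql --local-infile=1 -h central-db -u report -pSECRET"
      for SRC in site1-db site2-db site3-db; do
        mysql --batch --skip-column-names -h "$SRC" -u report -pSECRET \
          -e "SELECT '$SRC' AS origin, o.* FROM appdb.orders o" > "/tmp/${SRC}.tsv"
        $CENTRAL -e "LOAD DATA LOCAL INFILE '/tmp/${SRC}.tsv' INTO TABLE reporting.orders_all"
      done

    The extra origin column lives only in the central copy, so the production schemas stay untouched, and pagination can then run against the single orders_all table.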

    Read the article

  • Firebox 1250e Core Failing?

    - by Noah
    We have 2 Firebox 1250e Core firewall boxes in our production environment, running in active/passive mode. A few months back, the active box was flashing a warning light, so our consultant removed it and plugged it into a test network. Everything appeared to be working fine, so he reloaded it into the production environment, and we didn't see any other issues. Fast forward to last week, and our network was constantly dropping connections over RDC, timing out, and performing as if there was a traffic issue. I turned off the production box and everything began to work fine immediately. At this point though, I'm not sure how to proceed. Should the box be completely replaced? Is there any recommended testing we could do to determine if there is a failure of some type with this device? Should we try upgrading the software on it? I know the environment isn't the issue, since the passive box (which is now the active one) is working fine. We'd like to have 2 in production though for safety failover purposes. I am not a network admin, but am hoping someone here might be able to provide some guidance.

    Read the article

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was: class install( $common_instanceconfig = hiera_hash('common_instanceconfig'), $common_instances = hiera('common_instances') ) { define instances { common { $title: name => $title, path => $common_instanceconfig[$title]['path'], version => $common_instanceconfig[$title]['version'], files => $common_instanceconfig[$title]['files'], pre => $common_instanceconfig[$title]['pre'], after => $common_instanceconfig[$title]['after'], properties => $common_instanceconfig[$title]['properties'], require => $common_instanceconfig[$title]['require'] , } } instances {$common_instances:} } And the hieradata file was: classes: - install common_instances: - common_instance_1 - common_instance_2 common_instanceconfig: common_instance_1 path : '/opt/common_instance_1' version : 1.0 files : software-1.bin pre : pre_install.sh after : after_install.sh properties: "properties" common_instance_2: path : '/opt/common_instance_2' version : 2.0 files : software-2.bin pre : pre_install.sh after : after_install.sh properties: "properties" I always get an error message when the puppet agent runs: Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp It seems $common_instances is read correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file, and got a correct hash object. Can anybody help?

    Read the article

  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have a ASP3.0 application with a few ASP.NET (2.0) dittys mixed in. (Our longtime goal is to migrate everything to ASP.NET but that's not important for this issue) Our current test/deploy workflow is like this: 1 Use notepad++ or VS2008 to fix a bug/feature (depending on what I have open) 2 Open my virtual test-server 3 Copy the fixed file over, either with explorer, or if I can be bothered to open it, WinMerge 4 Test that the fix works 5 Close the virtual test-server 6 Connect to our host with VPN 7 Use WinMerge to update the files necessary 8 Pray to higher powers that the production environment is not so different that something bombs. To make things worse, only I have access to my "test-server". So I'm the only one testing it. I really want to make this a bit more robust, I even have a subversion setup running. But I always forget to commit changes... And I don't even work in my checked out folder, but a copy of what is currently in production... Can someone recommend some good reading on deploying, testing, staging and stuff like that. I currently use VS2008 and want to use subversion or GIT (or any other free VCS). Since I'm the only developer, teamsystem is not really an option (cost-related). I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature incorporated deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently... So it's not directly my fault that it is like this ;) )

    Read the article

  • Solr startup script problem

    - by Camran
    I have installed solr and it works finally... I have now problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in the /etc/init.d/solr dir... Here is that file: #!/bin/sh -e # SOLR auto-start # # description: auto-starts solr engine # processname: solr-production # pidfile: /var/run/solr-production.pid NAME="solr" PIDFILE="/var/run/solr-production.pid" LOG_FILE="/var/log/solr-production.log" SOLR_DIR="/etc/jetty" JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar" JAVA="/usr/bin/java" start() { echo -n "Starting $NAME... " if [ -f $PIDFILE ]; then echo "is already running!" else cd $SOLR_DIR $JAVA $JAVA_OPTIONS 2> $LOG_FILE & sleep 2 echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE echo "(Done)" fi return 0 } stop() { echo -n "Stopping $NAME... " if [ -f $PIDFILE ]; then cd $SOLR_DIR $JAVA $JAVA_OPTIONS --stop sleep 2 rm $PIDFILE echo "(Done)" else echo "can not stop, it is not running!" fi return 0 } case "$1" in start) start ;; stop) stop ;; restart) stop sleep 5 start ;; *) echo "Usage: $0 (start | stop | restart)" exit 1 ;; esac Whenever I do solr -start I get this error: "Error occurred during initialization of VM Could not reserve enough space for object heap" I think this is because of the file above... Also here is where I have solr installed: var/www/solr and here is the start.jar file located: var/www/start.jar Help me out if you know whats causing this. Thanks BTW: OS is ubuntu 9.10
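    One thing worth checking (a sketch, not a definitive fix): "Could not reserve enough space for object heap" usually means the -Xmx value is bigger than the memory the box can actually hand to the JVM, and SOLR_DIR should point at the directory that really contains start.jar (/var/www per the post, not /etc/jetty). Something along these lines in the init script, plus a manual test:

      SOLR_DIR="/var/www"                                                        # where start.jar actually lives
      JAVA_OPTIONS="-Xmx256m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar" # a heap the machine can afford

      # sanity checks before retrying the init script
      free -m                                           # how much RAM is really available?
      cd /var/www && java -Xmx256m -jar start.jar       # try starting Jetty/Solr by hand first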

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required. I need to be able to perform periodic maintenance without a maintenance window. Say an index rebuild, a job to purge old data, Windows Update, or hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a 3rd server) to do the maintenance, and then "swap" it into production. 1 Synchronize the spare by restoring backups, including a current transaction log. 2 Perform the maintenance tasks. 3 Reconfigure clients to connect to the spare server. Existing connections are finished within a minute or so. 4 The spare server is now the production server. The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access-lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access-list for every interface I have on the device. The access-lists are added to the interface via the access-group command. Let's say I have these access-lists: access-group WAN_access_in in interface WAN access-group INTERNAL_access_in in interface INTERNAL access-group Production_access_in in interface PRODUCTION WAN has security level 0, Internal security level 100, Production has security level 50. What I want to do is have an easy way to poke holes from Production to Internal. This seems to be pretty easy, but then the whole notion of security levels doesn't seem to matter any more. I then can't exit out the WAN interface. I would need to add an ANY ANY access-list, which in turn opens access completely for the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite the hassle. How is this done in practice? In iptables I would use logic something like this: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.

    Read the article

  • How do I set up postfix to store e-mail in a file instead of relaying it?

    - by GomoX
    I want to run a staging copy of a production server on a local environment. The system runs a PHP application, which sends e-mail to customers in various scenarios and I want to make sure no e-mail is ever sent from the staging environment. I can tweak the code so it uses a dummy e-mail sender, but i'd like to run the exact same code as the production environment. I can use a different MTA (Postfix is just what we use in production), but I'd like something that is easy to set up under Debian/Ubuntu :) So, I'd like to set up the local Postfix install to store all e-mail in (one or more) files instead of relaying it. Actually, I don't really care how it's stored as long as it's feasible to check the e-mail that was sent. Even a set up option that tells postfix to keep the e-mail in the mail queue would work (I can purge the queue when I reload the staging server with a copy from production). I know this is possible, I just haven't found any good solution online for what seems like a fairly common need. Thanks!
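    One way to do this with the same Postfix the production box uses is a regexp catch-all alias, so every recipient is rewritten to a local mailbox file and nothing ever leaves the machine. A sketch, assuming a throwaway local user named "catchall" and Debian/Ubuntu paths:

      sudo adduser --disabled-password --gecos "mail trap" catchall
      echo '/.*/  catchall@localhost' | sudo tee /etc/postfix/virtual_catchall
      sudo postconf -e 'virtual_alias_maps = regexp:/etc/postfix/virtual_catchall'
      sudo postconf -e 'mydestination = localhost'
      sudo postfix reload

      sudo tail -f /var/mail/catchall            # every "sent" e-mail lands in this mbox file

    The PHP application keeps calling the same sendmail/SMTP path as in production; only the final delivery is trapped locally.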

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is there's a few circumstances where our support team will simulate a log in into production to try to reproduce reported issues with a client's configuration. When they log in, an entry will be made into our analytics logs that their specific account has logged in, which we use to calculate billing. A few ideas we had to solve this: 1) We log IP addresses as well as machine keys for each PC that logs in - we could filter out known IP addresses and/or machine keys belonging to support. The drawback is we have to maintain a list of keys / addresses manually. 2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support / anyone else remembers to switch to debug mode. 3) Include some sort of reg key / similar setting required to be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the reg key or setting. All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS Of course, we shouldn't be testing in production, but there are a few one-off instances with customer set up that we can't reproduce without logging in as them in production. This is the only time we do so, and this is the case I'm talking about in this question.)

    Read the article

  • CBO cardinality estimates, histograms and col_usage$

    - by Liu Maclean
    This grew out of a CBO question raised on ITPub; the test case below reproduces it:

    SQL> create table maclean1 as select * from dba_objects; Table created. SQL> update maclean1 set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean1 on maclean1(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true); PL/SQL procedure successfully completed. SQL> explain plan for select * from maclean1 where status='INVALID'; Explained. SQL> set linesize 140 pagesize 1400 SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT --------------------------------------------------------------------------- Plan hash value: 987568083 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 11320 | 1028K| 85 (0)| 00:00:02 | |* 1 | TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K| 85 (0)| 00:00:02 | ------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("STATUS"='INVALID') 13 rows selected.

    The 10053 trace for the statement:

    Access path analysis for MACLEAN1 *************************************** SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN1[MACLEAN1] Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000 Table: MACLEAN1 Alias: MACLEAN1 Card: Original: 22639.000000 Rounded: 11320 Computed: 11319.50 Non Adjusted: 11319.50 Access Path: TableScan Cost: 85.33 Resp: 85.33 Degree: 0 Cost_io: 85.00 Cost_cpu: 11935345 Resp_io: 85.00 Resp_cpu: 11935345 Access Path: index (AllEqRange) Index: IND_MACLEAN1 resc_io: 185.00 resc_cpu: 8449916 ix_sel: 0.500000 ix_sel_with_filters: 0.500000 Cost: 185.24 Resp: 185.24 Degree: 1 Best:: AccessPath: TableScan Cost: 85.33 Degree: 1 Resp: 85.33 Card: 11319.50 Bytes: 0

    The trace shows Density = 0.500000, i.e. 1/NDV, so the CBO estimates that half of the 22639 rows satisfy the predicate. In reality only 2 rows satisfy the "STATUS"='INVALID' condition, but since dbms_stats collected no histogram on the status column, the CBO estimates 11320 rows, costs the index range scan on ind_maclean1 far higher than the full table scan, and picks the full scan; the optimizer can hardly be blamed for that. If a histogram is gathered on the column, the optimizer gets the cardinality for status='INVALID' right, as the following example shows:

    [oracle@vrh4 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production PL/SQL Release 11.2.0.2.0 - Production CORE 11.2.0.2.0 Production TNS for Linux: Version 11.2.0.2.0 - Production NLSRTL Version 11.2.0.2.0 - Production SQL> show parameter optimizer_fea NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com & www.askmaclean.com SQL> drop table maclean; Table dropped.
    SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2'); PL/SQL procedure successfully completed.

    This time method_opt explicitly asks for a 2-bucket histogram. Guy Harrison of Quest has published a handy script for displaying FREQUENCY histograms, reproduced here:

    rem rem Generate a histogram of data distribution in a column as recorded rem in dba_tab_histograms rem rem Guy Harrison Jan 2010 : www.guyharrison.net rem rem hexstr function is from From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563 set pagesize 10000 set lines 120 set verify off col char_value format a10 heading "Endpoint|value" col bucket_count format 99,999,999 heading "bucket|count" col pct format 999.99 heading "Pct" col pct_of_max format a62 heading "Pct of|Max value" rem col endpoint_value format 9999999999999 heading "endpoint|value" CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER) RETURN VARCHAR2 AS l_str LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x')); l_return VARCHAR2 (4000); BEGIN WHILE (l_str IS NOT NULL) LOOP l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx')); l_str := SUBSTR (l_str, 3); END LOOP; RETURN (SUBSTR (l_return, 1, 6)); END; / WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value , bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data; WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT hexstr(endpoint_value) char_value, bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data ORDER BY endpoint_value;

    Running the script against the STATUS column shows a FREQUENCY histogram in which dbms_stats recorded a bucket count of 9 (0.04 percent) for STATUS='INVALID'. With the histogram in place, the execution plan and the 10053 trace change as follows:

    SQL> explain plan for select * from maclean where status='INVALID'; Explained.
    SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT ------------------------------------- Plan hash value: 3087014066 ------------------------------------------------------------------------------------------- | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 | |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 | |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("STATUS"='INVALID')

    With the histogram the CBO estimates the cardinality of STATUS='INVALID' correctly (9 rows) and chooses the index range scan over the full table scan. The corresponding 10053 trace:

    SQL> alter system flush shared_pool; System altered. SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10053 trace name context forever ,level 1; Statement processed. SQL> explain plan for select * from maclean where status='INVALID'; Explained. SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN[MACLEAN] Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2 (here NewDensity = bucket_count / SUM(bucket_count) / 2) Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199 Histogram: Freq #Bkts: 2 UncompBkts: 22640 EndPtVals: 2 Table: MACLEAN Alias: MACLEAN Card: Original: 22640.000000 Rounded: 9 Computed: 9.00 Non Adjusted: 9.00 Access Path: TableScan Cost: 85.30 Resp: 85.30 Degree: 0 Cost_io: 85.00 Cost_cpu: 10804625 Resp_io: 85.00 Resp_cpu: 10804625 Access Path: index (AllEqRange) Index: IND_MACLEAN resc_io: 2.00 resc_cpu: 20763 ix_sel: 0.000398 ix_sel_with_filters: 0.000398 Cost: 2.00 Resp: 2.00 Degree: 1 Best:: AccessPath: IndexRange Index: IND_MACLEAN Cost: 2.00 Degree: 1 Resp: 2.00 Card: 9.00 Bytes: 0

    So an explicit 2-bucket histogram lets the CBO get the estimate right. The remaining question is why dbms_stats, with its default method_opt (FOR ALL COLUMNS SIZE AUTO), did not collect a histogram on the column by itself. With SIZE AUTO, dbms_stats decides whether a column deserves a histogram from the predicate usage recorded in sys.col_usage$. If the column has never appeared in a predicate before statistics are gathered, there is no entry for it in col_usage$ and dbms_stats skips the histogram. For example:

    SQL> drop table maclean; Table dropped. SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created.

    Gathering statistics on the freshly created maclean table with the default method_opt:

    SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS old  12:    WHERE owner = '&owner' new  12:    WHERE owner = 'SYS' Enter value for table: MACLEAN old  13:      AND table_name = '&table' new  13:      AND table_name = 'MACLEAN' Enter value for column: STATUS old  14:      AND column_name = '&column' new  14:      AND column_name = 'STATUS' no rows selected

    No histogram was created, because col_usage$ holds no record for the column yet. Now run some queries with an equality predicate on status and gather statistics again:
declare begin for i in 1..500 loop execute immediate ' alter system flush shared_pool'; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; execute immediate 'select count(*) from maclean where status=''INVALID'' ' ; end loop; end; / PL/SQL procedure successfully completed. SQL> select obj# from obj$ where name='MACLEAN';       OBJ# ----------      97215 SQL> select * from  col_usage$ where  OBJ#=97215;       OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------      97215          1              1              0                 0           0          0          0 17-OCT-11      97215         10            499              0                 0           0          0          0 17-OCT-11 SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS Enter value for table: MACLEAN Enter value for column: STATUS Endpoint        bucket         Pct of value            count     Pct Max value ---------- ----------- ------- -------------------------------------------------------------- INVALI               2     .04 VALIC3           5,453   99.96  *************************************************

    Read the article
