Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • Whole Lotta Virtualization Goin' On

    - by rickramsey
    Lately we've published a lot of content about virtualization. Here's a sampling.
    Podcast: Technology Preview of Transcendent Memory. Turns out that in a virtual environment, RAM is the bottleneck. Not because it's slow (it's not), but because each CPU still has to use its own RAM, which gets expensive. In this podcast, Dan Magenheimer describes how Oracle and the open source community taught the guest kernel in Oracle Linux to share its memory with other CPUs. Transcendent memory will wind up saving large data centers a lot of money. Find out how.
    Tech Article: How to Use Oracle VM Templates. This article describes how to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. It also describes how to create a virtual machine based on that template, and how you can clone the template and change the clone's configuration.
    Tech Article: How to Set Up a Load Balanced Application Across Two Oracle Solaris Zones. Install Apache Tomcat on two Oracle Solaris zones. Connect them across a VPN. And let the Integrated Load Balancer in Oracle Solaris 11 manage traffic. Presto: high(er) availability in a single server.
    Tech Article: How to Install Oracle RAC on Oracle Solaris Zone Clusters. Learn how to implement a multi-tiered database environment that isolates database tiers and administrative domains, while taking advantage of centralized (and simpler) cluster admin.
    For fans of Jerry Lee Lewis: If you're a fan of Jerry Lee Lewis, you might enjoy this video. - Rick

    Read the article

  • Choosing what logwatch is reporting on, on CentOS 5.4

    - by florin
    I have two CentOS 5.4 servers that I set up within weeks of each other. One is an e-mail server (let's label it EM) and the other is a web and FTP server (labeled WF). Logwatch came pre-configured and I have not altered its setup in any way, but the log messages are quite different between the two: server EM reports ssh status while WF does not, and with ntpd the situation is reversed. I know I could start tweaking knobs in /etc/logwatch and the like, but why are the results from the default configuration so different?
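    One way to compare the two machines is to ask Logwatch for a single service explicitly and to look at the effective per-service configuration on each. A minimal sketch, assuming the stock Logwatch 7.x layout that CentOS 5 ships (defaults under /usr/share/logwatch/default.conf, overrides under /etc/logwatch/conf):

        # Force a report for one service, regardless of the defaults
        logwatch --service sshd --range today --print
        # See which per-service configs each server actually carries
        ls /etc/logwatch/conf/services /usr/share/logwatch/default.conf/services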

    Read the article

  • Restoring an Ubuntu Server using ZFS RAIDZ for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on one ext4 disk, data on three RAIDZ disks). As Joel Spolsky and Jeff Atwood say with regard to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against. Q: How do I configure Ubuntu Server to recognise a pre-existing RAIDZ array? Clearly if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore, that's also an easy scenario. But if the OS dies and I can't restore, then I need to recreate an Ubuntu server; how do I get it to recognise my RAIDZ array? Is the necessary configuration information stored within and across the RAIDZ array, and does it simply need to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?
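    ZFS keeps the pool configuration in labels on the member disks themselves, so a rebuilt OS can rediscover the array simply by scanning for importable pools. A minimal sketch, with "tank" standing in for whatever the pool was named:

        # Scan all attached disks and list any importable pools found
        sudo zpool import
        # Import by name; -f is needed if the pool was never cleanly exported
        sudo zpool import -f tank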

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without groups, just using host declarations) for BOOTP options. My configuration file right now looks like: subnet 130.123.131.128 netmask 255.255.255.128 { allow unknown-clients; } host dev-mac-09 { option vendor-class-identifier "example-identifier"; hardware ethernet 10:9a:dd:51:ff:83; } If I put vendor-class-identifier in the global scope, using tcpdump I can see that the client receives the vendor class option successfully. If I take it out and keep it only in the host scope (or group scope), the client never receives the option. Specifying option dhcp-parameter-request-list 60 doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't part of the group. As an aside, how do I get detailed logging? At least something to indicate which groups and declarations were used to generate the response to the client.
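    A class matched directly on the client's hardware address sidesteps the host-scope matching question entirely. This is a hypothetical sketch for ISC dhcpd (the match expression and log-facility directive are standard dhcpd features, but the approach is untested here):

        # Match the client by MAC rather than relying on the host declaration;
        # the first byte of "hardware" is the type, the next six are the MAC
        class "dev-mac-09-class" {
            match if substring(hardware, 1, 6) = 10:9a:dd:51:ff:83;
            option vendor-class-identifier "example-identifier";
        }
        # For more logging detail, send dhcpd to its own syslog facility...
        log-facility local7;
        # ...or run it in the foreground with debug output: dhcpd -d -f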

    Read the article

  • Is an 850W SMPS enough for the config below?

    - by bali208
    We are planning to assemble a file server with the following configuration: 1) Intel S2400SC2 motherboard. 2) Xeon E5-2407 processor x 2 (dual processors). 3) 8GB x 8 ECC DDR3 RAM. 4) One 2 TB Seagate HDD. 5) 8 cabinet fans fitted in an NZXT Switch 810 cabinet. Our question: is a Cooler Master 850 W power supply enough for our machine, or do we have to use a 1050 W power supply? Please suggest the best brand and wattage of SMPS to buy for our system. Thank you.
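    For a rough sanity check, a back-of-the-envelope power budget puts this build far below 850 W. The figures below are approximate TDP and typical-draw assumptions, not measurements:

        2 x Xeon E5-2407 (80 W TDP each)  ~ 160 W
        8 x DDR3 ECC DIMMs (~4 W each)    ~  32 W
        1 x 3.5" 2 TB HDD                 ~  10 W
        8 x case fans (~3 W each)         ~  24 W
        Motherboard, NIC, misc            ~  60 W
        -----------------------------------------
        Estimated peak draw               ~ 286 W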

    Read the article

  • Apache virtual host concept and DNS

    - by Subhransu
    I want to host around 60 project repositories and serve them from a dedicated remote server (Ubuntu) with the help of a Mercurial server, so that all my developers will be able to push and pull their changes. I have followed this article in order to do that, but got stuck at the Apache configuration step (sections 2.5 and 2.5.4). I have the following questions: What steps do I need to follow to make Apache serve /home/hg/repositories/private/hgweb.cgi when I enter dev.example.com/private? Is my virtual host file correct, or do I need to change anything? I bought example.com; how do I make it serve dev.example.com/private? Do I need to add an A record (like subdomain.example.com pointing to the IP of my server) in the cPanel of the hosting company? ServerAdmin webmaster@localhost ServerName dev.example.com ScriptAlias /private /home/hg/repositories/private/hgweb.cgi <Directory /home/hg/repositories/private/> Options ExecCGI FollowSymlinks AddHandler cgi-script .cgi DirectoryIndex hgweb.cgi AuthType Basic AuthName "Mercurial repositories" AuthUserFile /home/hg/tools/hgusers Require valid-user </Directory> ErrorLog ${APACHE_LOG_DIR}/dev.example.com_error.log # Possible values include: debug, info, notice, warn, error, crit, alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/dev.example.com_ssl_access.log combined SSLEngine on SSLCertificateFile "/etc/apache2/ssl/dev.example.com.crt" SSLCertificateKeyFile "/etc/apache2/ssl/dev.example.com.key" NOTE: The above is my virtual host file. I have not enabled the site yet, and I have also not changed any hosts file, hostname, or httpd.conf.
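    On the server side the virtual host just has to be enabled, and on the DNS side an A record for the dev host is exactly what is needed. A minimal sketch, assuming the vhost above is saved as /etc/apache2/sites-available/dev.example.com:

        # Enable SSL and CGI support, then the site itself
        sudo a2enmod ssl cgi
        sudo a2ensite dev.example.com
        sudo /etc/init.d/apache2 reload
        # At the registrar/cPanel, add a DNS record:  dev  A  <server public IP>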

    Read the article

  • Can I use adsutil.vbs to remove the 'X-Powered-By' response header from IIS6

    - by Anthony
    Can I use adsutil.vbs to remove the 'X-Powered-By' response header from the IIS6 configuration? We have a site that is running IIS6. Migration to IIS7 is a few months away at least. In the meantime, I have noticed that it is serving the header X-Powered-By: ASP.NET on all responses. I want this gone, and I would strongly prefer to script this so that future deployments also ensure that the header is not present. Is there a way to do this using adsutil.vbs?
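    In the IIS6 metabase, that header lives in the HttpCustomHeaders property, which adsutil.vbs can rewrite. A hypothetical sketch (property name and script path as commonly installed; untested here):

        REM Clear the custom-headers list, which holds "X-Powered-By: ASP.NET"
        REM by default, at the server-wide (W3SVC) level
        cscript C:\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/HttpCustomHeaders ""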

    Read the article

  • NetBSD as VMware Workstation guest: `startx` hangs and maxes out all CPUs

    - by Howard Guo
    I am using VMware Workstation 8. I have attempted to install and run NetBSD 5.1.2 and 6.0. The installations all went OK and the system was usable until I installed a window manager. After installing xfce4 on NetBSD 5.1.2, I could run startx and use xfce4 twice; subsequent startx attempts, however, hang and max all CPUs at 100%. On NetBSD 6.0 RC2, I could not even start xfce4 once: startx hangs and maxes all CPUs at 100%. I have tried both the vmwlegacy and vmware device drivers; they don't help. I have also tried both 32-bit and 64-bit NetBSD; they behave the same way. I also tried to capture the output of startx, but the system was already hanging before the output got flushed. Apparently no one else has encountered this trouble, judging by a Google search. Did I miss a configuration piece? Any other suggestions, please?
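    Since the console hangs before stdout is flushed, the X server's own log file is often a better source. A minimal sketch, assuming the stock X.Org log location that NetBSD's bundled X uses:

        # Send startx output to a file instead of the dying console
        startx > /tmp/startx.out 2>&1
        # After rebooting, inspect the X server log from the hung session
        tail -50 /var/log/Xorg.0.log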

    Read the article

  • Have rTorrent move .torrent files

    - by David Alvares
    I am running rTorrent 0.9.2 and have configured it to move completed torrents to a different folder with this configuration line: system.method.set_key = event.download.finished,move_complete,"d.set_directory=~/done/;execute=mv,-u,$d.get_base_path=,~/done/" This is working fine, but I would also like it to move the .torrent file that rTorrent creates (from a magnet link, in the session directory) into this done directory, with the same name as the torrent plus a .torrent extension. I tried adding another cp command, but I cannot seem to figure out which variable stores the torrent's hash ($d.get_hash did not work), which is what the .torrent files in the session directory are named after. Is there a way to do this with rTorrent, and if so, how?
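    In the 0.9.x command set the info hash is exposed as d.hash (the older $d.get_hash spelling was dropped in the syntax rename). The line below is a hypothetical, untested sketch that additionally assumes rTorrent's inline command expansion, (cat,...), and the session.path getter are available in 0.9.2:

        # On finish, also copy the session <hash>.torrent to ~/done/<name>.torrent
        method.set_key = event.download.finished,move_torrent,\
            "execute=cp,(cat,(session.path),(d.hash),.torrent),(cat,~/done/,(d.name),.torrent)"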

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria against which to test your business systems (Oracle E-Business Suite, EBS) or whatever else supports your lines of digital business; if you score low, you need Oracle Billing and Revenue Management (BRM).
    Functions: Scalability; High Availability (99.999%); Performance; Extensibility (e.g. APIs, Tools); Upgradability; Maintenance; Security; Standards Compliance; Regulatory Compliance (e.g. SOX); User Experience; Implementation Complexity.
    Features: Customer Management; Real-Time Service Authorization; Pricing/Promotions Flexibility; Subscriptions; Usage Rating and Pricing; Real-Time Balance Mgmt.; Non-Currency Resources; Billing & Invoicing; A/R & G/L; Payments & Collections; Revenue Assurance; Integration with Key Enterprise Applications; Reporting; Business Intelligence; Order & Service Mgmt (OSM); Siebel CRM; E-Business Suite; On-/Off-line Mediation; Payment Processing; Taxation; Royalties & Settlements; Operations Management; Disaster Recovery.
    Overall Evaluation: Implementation; Configuration; Extensibility; Maintenance; Upgradability; Functional Richness; Feature Richness; Usability; OOB Integrations; Operations Management; Leveraging Oracle Technology; Overall Fit for Purpose.
    You need Oracle BRM because it: is built for high-volume transaction processing; monetizes any service or event based on any metric; supports high-volume usage rating, pricing, and promotions; provides real-time charging, service authorization, and balance management; supports any account structure (e.g. corporate hierarchies); scales from low volumes to extremely high volumes of transactions (e.g. billions of transactions per hour); and exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA).
    Immediate business benefits of BRM: improved business agility and performance (supports the flexibility, innovation, and customer-centricity required for current and future business models); faster time to market for new products and services (supports a real-time 360-degree view of the customer, so products can be launched to targeted customers at a record-breaking pace); streamlined deployment and operation (productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs); an extensible and scalable solution (minimizes risk: the initial phase is deployed rapidly, and the solution is extended and scaled seamlessly per business requirements).
    Key considerations: productized integration with key Oracle applications (lower integration risk and cost; an efficient order-to-cash process); an engineered solution certified on the Exa platform (Exadata was tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware); a productized solution in Rapid Offer Design and Order Delivery (fast offer design and implementation; significantly shorter order cycle time); productized integration with Oracle Enterprise Manager (visibility into system operability for optimal uptime).

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server 2008 R2 Analysis Services Operations Guide

    - by pinaldave
    SQL Server Analysis Services (SSAS) has always been an interesting subject for research. Analysis Services cubes are a very powerful tool in the hands of the business intelligence (BI) developer. They provide an easy way to expose even large data models directly to business users. Microsoft has published a very informative white paper, the Analysis Services Operations Guide, authored by Thomas Kejser, John Sirmon, and Denny Lee. In this guide you will find information on how to test and run Microsoft SQL Server Analysis Services in SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2 in a production environment. The focus of the guide is how you can test, monitor, diagnose, and remove production issues on even the largest scaled cubes. The paper also provides guidance on how to configure the server for the best possible performance. The goal of the guide is to make your operations processes as painless as possible, and to have you running with the best possible performance without any additional development effort to your deployed cubes. In this guide, you will learn how to get the best out of your existing data model by making changes transparent to the data model and by making configuration changes that improve the user experience of the cube. Download SQL Server 2008 R2 Analysis Services Operations Guide Note: Abstract taken from the white paper. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Can /etc/hosts.deny/allow be overridden?

    - by Tar
    I have security measures in place to keep unwanted users out of my server. I've changed the SSH port, disabled root login, run a software firewall to block port scans, and have entries in hosts.deny and hosts.allow. I have various services denied to all but another server of mine (in case my IP changes), two other administrators, and my own IP address. My question is: can the hosts.deny/allow configuration be overridden so that attackers could gain access to my server? And does using a chroot jail for running things like an IRC server and a TeamSpeak server prevent people from gaining access to my server and screwing with it?
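    One caveat about the mechanism itself: hosts.allow and hosts.deny are consulted only by services linked against libwrap (or started via tcpd), so they are not a blanket firewall. A minimal deny-by-default sketch, with documentation-range addresses standing in for the real admin IPs:

        # /etc/hosts.deny -- refuse everything not explicitly allowed
        ALL: ALL
        # /etc/hosts.allow -- hypothetical admin addresses
        sshd: 203.0.113.10 203.0.113.11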

    Read the article

  • Can't log in using the sa account for SQL Server 2008

    - by tessa
    I installed SQL Server 2008. During the install I set it to mixed-mode authentication and set the password for what I assume is the sa account. In Configuration Manager I set TCP/IP and Named Pipes to enabled. When I open SQL Server Management Studio and try to log in (username: sa, password: what I just set in the install), it fails with the error: Login failed for user sa (error 18456). The error in Event Viewer is: Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]. I know the password is right because I just set it. What am I doing wrong here? Is sa not the right user to be logging in with in mixed mode? I've been reading through forum after forum but just cannot find anything that works.
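    Two common culprits are the sa login being disabled and an authentication-mode change not taking effect until the service restarts. A hypothetical check, assuming you can connect with Windows authentication:

        -- Re-enable sa and (re)set its password
        ALTER LOGIN sa ENABLE;
        ALTER LOGIN sa WITH PASSWORD = 'TheStrongPasswordYouChose1!';
        -- Then restart the SQL Server service so any mode change is picked up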

    Read the article

  • I can't get router and switches configured properly for my home office network

    - by BernicusMaximus
    Networking Gurus, I recently built a new detached garage with an office above, and had it tied into my existing home Ethernet wiring. The Ethernet signal is coming into the garage just fine, but I cannot get my network configured the way I want because of problems linking the various router/switch devices. Please see the following link for the network diagrams: Home Network. So basically, I can't get my future state to work. I'm not sure if I'm using incompatible switches or what, but I tried the future state with some 4-port switches from Best Buy and had no luck, and resorted to setting up the current state so I could operate. What I am looking for is help on how best to get my future state working. Is this possible with my current configuration, and if not, what should I do? Any help is appreciated. Thanks, Bernie

    Read the article

  • Which PHP accelerator to use with prefork mpm (CentOS and RedHat default httpd settings)

    - by FractalizeR
    Hello. As you know, even though it is possible to start httpd in worker mode under CentOS/RedHat, the PHP in the default RPM repo is not thread-safe, and the default configuration for stability is mpm_prefork. So, two questions: Is there a PHP accelerator capable of working in mpm_prefork mode (using shm or whatever)? If there is none, what can be done to improve PHP speed on CentOS/RedHat systems? (I want to use RPMs, preferably from the default CentOS repo; building custom PHP from source code is not a good option for me.)
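    Opcode caches generally fit this niche: APC, for example, keeps its cache in shared memory that prefork's forked children inherit, so it works without threads. A minimal sketch, assuming the php-pecl-apc package from the EPEL repository (it is not in the base CentOS repo):

        yum install php-pecl-apc
        service httpd restart
        # Confirm the extension is loaded
        php -r 'var_dump(extension_loaded("apc"));'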

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction: More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
    Hive Actions: Prepping for Pig. In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic';
    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.
    Starting Our Workflow. Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be started automatically, either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling (see the sketch below). The bare minimum workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action>
    There are a few things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode; I have to include a hive-default.xml file; I have to include a script file; and the hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need into it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-default.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql.
    Adding Pig to the Ooze. Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset; but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files from the directory in which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath: anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.
    Making the Workflow Work. We've got a workflow defined and have collected all the components we'll need to run it. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze
    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName); we're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important: this is a directory in HDFS which holds Oozie's shared libraries, a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.
    We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First, validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument: oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit
    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job: oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run"; this will prep and run the workflow immediately.
    Takeaway. So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
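    As noted above, turning this into a coordinator job needs only one more XML file. A minimal sketch of what that file might look like; the frequency (in minutes), dates, and paths here are purely illustrative:

        <coordinator-app name="WeatherCoord" frequency="240"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z"
                         timezone="UTC" xmlns="uri:oozie:coordinator:0.1">
          <action>
            <workflow>
              <app-path>hdfs://localhost:8020/user/oracle/weather_ooze</app-path>
            </workflow>
          </action>
        </coordinator-app>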

    Read the article

  • Can't connect to wireless router anymore due to data rate problem

    - by Jay White
    I was playing around with my wireless router and switched the mode to fixed mode B. Now I can no longer associate with the AP. Windows does not give any particular error message, but with Wireshark I can see that the returned error is that the client does not support the necessary data rate. My wireless card is type N, and it is set to a/b/g-compatible mode. I tried setting it to just B, but this made no difference. How can I set the data rate of my card so that I can connect to my AP again? I would prefer not to just reset the device, as some configuration has been done that would be a pain to redo, and I also do not have the ISP password handy. Regardless, I would like to understand this situation better.

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue; let me explain my problem: I want to set up a git repository that three or four users will contribute to, so they need to download the code and be able to upload their changes to the server or update their branches with the latest modifications. So I set up a Linux machine, installed git, set up the repository, then added the users in order to enable access through SSH. Now my question is: what's next? The git documentation is a little bit confusing. For example, when I try to clone the repository from a dummy user account I get: xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/ [email protected]'s password: fatal: '/pub.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly; git is powerful but, I admit, not so user-friendly. Thanks Br
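    The error itself is a path problem: an ssh:// URL is resolved from the filesystem root, so '/pub.git' literally means a repository at the top of the filesystem. A minimal sketch, with the host and repository location as placeholder assumptions:

        # Spell out the absolute path in ssh:// form...
        git clone ssh://git@server.example.com/home/git/pub.git
        # ...or use scp-style syntax, which resolves relative to the login's home
        git clone git@server.example.com:pub.git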

    Read the article

  • m23 vs webmin vs landscape vs whatever you can propose: I need software to maintain a bunch of Debian machines

    - by marc.riera
    Hello, I know there is Landscape from Canonical, but it has some $$ cost. Also there is Webmin, which can be used as a cluster management tool. And there is m23, probably the most usable and interesting piece of management software. But what would you suggest to install and use for the following configuration? 1) 100 desktop users, against an AD with Quest authentication services installed (Ubuntu 8.04, 9.04, 9.10, 10.04). 2) 50 servers (Debian sid, lenny, Ubuntu 8.04 and 10.04). We work on different software, so each group of people needs different configurations, and each server has a different purpose; nothing is clustered. We also have good enough backup software. So, my objectives are: easy install (deploy), good reporting, easy logon scripts for users, and easy boot-up scripts for servers. Thanks all for reading, and more thanks for your time. Marc

    Read the article

  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6-core processors. The person that configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware that these processors have 3 memory channels and that, to optimize memory performance, one should choose memory in multiples of three. Based on this, I have a question: by entering the BIOS, can I disable the bank on each processor that holds the single memory module? If so, will this have any adverse effects, or behave differently than physically removing the modules? I ask because I would prefer to keep the extra memory in the system if it truly behaves as if the memory is not even there. I also see this as an opportunity to test 12GB vs. 16GB to see if there is a noticeable difference. Note: According to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 and, in addition, memory bandwidth goes down by about 23%.

    Read the article

  • SQL Server 2012 Memory Manager KB articles

    - by SQLOS Team
    Since the release of SQL Server 2012 with its redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as on fixes that have been published. How has memory sizing changed in SQL 2012? 2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912 Setting "locked pages" to avoid SQL Server memory pages getting swapped has been simplified, particularly for Standard Edition; the details can be found here: 2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143 Note the following deprecation (particularly relevant for 32-bit installations): 2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592 Note the following fixes available: 2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US 2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Squid not caching

    - by Abhishek Chanda
    I am trying to configure Squid as a caching server. I have a LAN where the web server (Apache) is at 192.168.122.11, Squid is at 192.168.122.21, and my client is at 192.168.122.22. The problem is, when I look at Squid's access log, all I see are TCP_MISS messages; it seems Squid is not caching at all. I checked that the cache directory has all the proper permissions. What else can go wrong here? Please let me know if I need to post the configuration.
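    Two quick things worth ruling out: the origin server may be sending cache-unfriendly headers, and Squid caches only in memory unless a cache_dir is configured. A minimal sketch of both checks, with paths as in a stock Squid install:

        # Is Apache sending headers that forbid caching?
        curl -sI http://192.168.122.11/ | egrep -i 'cache-control|expires|pragma'
        # Does squid.conf define an on-disk cache at all?
        grep -E '^(cache_dir|refresh_pattern)' /etc/squid/squid.conf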

    Read the article

  • Hardware Acceleration on Windows Server 2008

    - by user1184598
    Does Windows Server 2008 have hardware acceleration? I tried to use WPF to make over 20,000 drawings. This causes only around 40% CPU on Windows 7. However, when I run the same program on Windows Server 2008 with the same hardware configuration, except that it has a dedicated graphics card (GT 9500) while the Windows 7 machine has only an onboard display adapter, it causes over 80% CPU. So, does Windows Server 2008 have hardware acceleration? If not, can I enable it? And how do I change the hardware acceleration setting? Thanks.

    Read the article

  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle ME Embedded 3.2 and the Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform.
    1. Decouples software development from the hardware development cycle. Development is typically split between hardware and software in a traditional design flow. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware/software design process typically leads to two or more re-development phases. With Embedded Java, all specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation.
    2. Development and testing can be done (mostly) using standard desktop systems through emulation. Because the software and hardware are decoupled, it becomes easier to test the software long before it reaches the hardware, through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOT Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs. This often included network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware: the software is fixed, refined, and refactored without the time and expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows-based platforms. In the end, of course, developers should do a full round of testing on the hardware, as incompatibilities between emulators and hardware will exist, but the amount of time needed should be significantly reduced.
    3. Highly productive language, APIs, runtime, and tools mean quick time to market. Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very capable combination of the Java Virtual Machine, the Java language, and its robust APIs. Combine that with the Java ME SDK for small devices, or just NetBeans for the larger devices, and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course, this assumes that the engineers don't get slap-happy adding new features given the extra time they'll have.
    4. Create high-performance, portable, secure, robust, cross-platform applications easily. The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code, and in some memory-allocation-intensive circumstances exceed it. Specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform-independent; this creates a write-once, run-anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. For security, Java achieves protection by confining a Java program to a Java execution environment and not allowing it to access other parts of the computer. For robustness, the program must execute reliably across a variety of systems. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product, optimized in release 3.2 for chipsets based on ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM V5, V6, and V7, X86, and Power Architecture Linux systems.
    5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences). This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying platform variations; those are managed by the JVM. Gone are C/C++ problems like memory corruption, stack overflows, and other such bugs which are extremely difficult to isolate. Of course, this doesn't imply that you can get away from native code completely; there could be some situations where you have to write native code in either assembler or C/C++, but those instances should be limited.
    6. The most popular embedded processors are supported, allowing design flexibility. Java SE Embedded is now available on ARM V5, V6, and V7, along with Linux on X86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM-architecture SOCs with low memory footprints, plus a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).
    7. Support for key embedded features (low footprint, power management, low latency, etc.). All embedded devices are by their very nature constrained in some way. Economics may dictate a device with less RAM and ROM. CPU needs can dictate a less powerful device. Power consumption is another major constraint in some embedded devices, as connecting to a consistent power source is not always desirable or possible; others have to be constantly on. Many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in environments as low as 130KB RAM/350KB ROM for a minimal, customized configuration, up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key embedded functionality such as auto-start and recovery and flexible networking is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems, providing a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), connected devices.
    8. Leverage the huge Java developer ecosystem (expertise, existing code). There are over 9 million developers in the world who work with Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. The Java Embedded Community on Java.net is the central gathering place for embedded Java developers, and conferences like Embedded Java @ JavaOne and a variety of hardware vendor conferences like the Freescale Technology Forums offer excellent opportunities for those interested in embedded systems.
    9. Easily create end-to-end solutions integrated with Java back-end services. In the "Internet of Things", things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage; it could also collect money from a credit card and send information about current sales. Similarly, an embedded house power monitoring system doesn't just manage the power usage in a house; it can also send that data back to the power company. In both cases it isn't about the individual thing, but about monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, or world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES includes a GlassFish server and Java DB as part of the installation. The result is an end-to-end Java solution.
    10. Solutions from constrained devices to server-class systems. Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions: the Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's advanced VoIP phone, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few from a list of embedded Java implementations that continues to grow.
    Conclusion. Now if you're a Java developer you probably look at some of the 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For Java developers, your employment opportunities are about to increase. For both, it's a great time to start developing Java for the "Internet of Things".

    Read the article

  • Which IMAP flags are reliably supported across most mail servers?

    - by Ben Butler-Cole
    I am writing an application which reacts to emails sent to a mailbox. It retrieves the emails via IMAP. It will be deployed to a number of systems where I do not control the mail server configuration. I would like to use IMAP flags to indicate which messages have been handled. Are the system flags sufficiently widely supported that I can reasonably depend on them in my application? Are user-defined flags sufficiently widely supported? (If the answer is "ha ha, not a chance", then I shall use folders instead.) Thanks -Ben
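    For reference, the system flags are fixed by RFC 3501 (\Seen, \Answered, \Flagged, \Deleted, \Draft, \Recent), and a server advertises whether it will store arbitrary user-defined keywords via the PERMANENTFLAGS response code: a "\*" in that list means custom keywords are accepted. A sketch of what to look for when selecting a mailbox:

        a1 SELECT INBOX
        * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
        * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags permitted.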

    Read the article
