Search Results

Search found 7564 results on 303 pages for 'job scheduling'.


  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
     When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the WebDAV parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster waiting to happen at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not add that load and complexity to the server). suEXEC won't work either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
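
     A middle ground worth testing (a sketch under assumptions, not a vetted fix; the group name here is made up): a umask of 002 instead of 000, combined with a setgid share directory owned by a dedicated group that the unprivileged users join, avoids world-writable files while still letting those users edit WebDAV-created files:

         # /etc/apache2/envvars -- note this umask applies to the whole Apache process
         umask 002

         # one-time setup from a root shell (group name is illustrative)
         groupadd webdav-users
         usermod -aG webdav-users foo
         chgrp -R webdav-users /home/foo/www
         chmod 2770 /home/foo/www    # setgid bit: new files inherit the group

     New files then land as www-data:webdav-users with mode 664, so group members can edit them without world permissions or a permission-flipping cron job.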

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
     Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email, since Rails 2.2 and later require TLS for sending mail. The message in the production log file says: hostname was not match with the server certificate. I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com; if I run the command hostname at the command line I get ohlalaweb.com. Postfix seems to work fine: I can send emails from the command line to my Gmail, Yahoo, and Google Apps Gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

         # See /usr/share/postfix/main.cf.dist for a commented, more complete version
         # Debian specific:  Specifying a file name will cause the first
         # line of that file to be used as the name.  The Debian default
         # is /etc/mailname.
         myorigin = /etc/mailname
         smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
         biff = no
         # appending .domain is the MUA's job.
         append_dot_mydomain = no
         # Uncomment the next line to generate "delayed mail" warnings
         #delay_warning_time = 4h
         readme_directory = no
         # TLS parameters
         smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
         smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
         smtpd_use_tls=yes
         # SA created next line to force postfix to use self create certificate
         smtpd_tls_auth_only=yes
         smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
         smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
         # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
         # information on enabling SSL in the smtp client.
         myhostname = ohlalaweb.com
         alias_maps = hash:/etc/aliases
         alias_database = hash:/etc/aliases
         mydestination = localhost.localdomain, localhost
         relayhost =
         mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
         mailbox_size_limit = 0
         recipient_delimiter = +
         inet_interfaces = all

     I have regenerated the SSL keys with the ohlalaweb.com host name. Any ideas or suggestions?
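
     Two hedged checks (assumptions to verify, not confirmed fixes): the certificate's CN must literally match the host that ActionMailer connects to, and the Rails smtp_settings :address should name that same host. A self-signed certificate with a matching CN can be regenerated like this:

         # regenerate the combined cert+key with a CN that matches the mail host
         openssl req -new -x509 -days 3650 -nodes \
           -subj "/CN=ohlalaweb.com" \
           -keyout /etc/ssl/certs/ohlalaweb.pem \
           -out /etc/ssl/certs/ohlalaweb.pem

     If ActionMailer dials localhost while the certificate says ohlalaweb.com, the names won't match even with a perfectly good cert, so pointing :address at ohlalaweb.com (or issuing the cert for the name actually dialed) is the thing to line up.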

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
     In my organization, I work with a group of NOC staff, budding junior engineers, and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed, e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation, and configuration management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not or cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
     So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people were mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (port 8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org), and I have mod_proxy loaded. I am trying to redirect everyone who visits runner.org to http://some.ip.address:8080/openrunner-webapp/. So far I've gotten runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

         ProxyPass / http://localhost:8080/openrunner-webapp

     All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
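
     A minimal sketch of the vhost (assuming mod_proxy and mod_proxy_http are both enabled; note the trailing slashes, which ProxyPass treats as significant):

         <VirtualHost *:80>
             ServerName runner.org
             ProxyRequests Off
             ProxyPreserveHost On
             ProxyPass        / http://localhost:8080/openrunner-webapp/
             ProxyPassReverse / http://localhost:8080/openrunner-webapp/
         </VirtualHost>

     A mismatched trailing slash on a ProxyPass, or a leftover Redirect in httpd.conf taking precedence over the vhost, would both produce the "root of the Apache server" symptom described.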

  • Scheduled task does not run unattended on Windows 2003 server on VMware, runs fine otherwise

    - by lnm
     A scheduled task does not run on a Windows 2003 server on VMware, while the same setup runs fine on a standalone server. The test below demonstrates the problem; we really need to run a more complex bat file, but this shows the issue. I have a bat file that copies a file from server A to server B, using full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain id (with password) that is part of the administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run: there is no error message in the Task Scheduler log, just an entry that the task started, but no entry for a finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server: no issues at all. I checked the obvious things: the task has "Run only if logged on" unchecked, the domain id has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?
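
     One hedged way to narrow this down (a diagnostic sketch, not a fix; paths and share names below are examples): make the task log what it can actually see when nobody is logged on, since some network state only exists in interactive sessions:

         @echo off
         rem diagnostic wrapper -- run this as the scheduled task instead of the real bat
         echo %DATE% %TIME% running as %USERNAME% >> C:\tasklog.txt
         net use >> C:\tasklog.txt 2>&1
         copy \\serverA\share\file.txt \\serverB\share\ >> C:\tasklog.txt 2>&1
         echo exit code %ERRORLEVEL% >> C:\tasklog.txt

     If the log never appears, the task isn't starting at all (a credential or logon-type issue); if it appears with an access error, the non-interactive session can't reach the source share, which points at the network side rather than the scheduler.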

  • 0 connected nodes in datastax opscenter

    - by gansbrest
     Installed opscenterd on a separate node outside of the cluster, but within the firewall (AWS security group). Tested all possible ports between the agents and the OpsCenter server. No errors in the log:

         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
         2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
         2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
         2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
         2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
         2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

     The most interesting part is that I can see some data in the OpsCenter UI: when I stop the agents, no data is displayed; when I start them, it shows up again. But at the same time it shows 0 connected nodes. Storage capacity is even funnier: 3 of 0 nodes. Any ideas why that could be happening?
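
     One assumption worth checking (hedged; the file location varies by agent version): each agent's address.yaml should carry the OpsCenter server's IP in stomp_interface and the node's own IP in local_interface, or the agent can push metrics (so graphs render) while the node list stays empty:

         # /var/lib/opscenter-agent/conf/address.yaml  (path may differ on your version)
         stomp_interface: <opscenter-server-ip>
         local_interface: <this-node-ip>

     followed by an agent restart on each node. The agent logs on the nodes themselves (as opposed to opscenterd's log) would be the other place to look for a connect error.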

  • Dell Inspiron 1564 overheating but fan not switching on, how to diagnose?

    - by Smugrik
     I've got a Dell Inspiron 1564 laptop that is about one and a half years old. For about a week now, the laptop has been overheating, causing it to switch off unexpectedly... The CPU fan works erratically: it can start to spin for a while, doing its job and cooling down the CPU before it stops, but then the temperature goes up and the fan doesn't react. Once the temperature reaches a critical point (over 85 Celsius, checked with SpeedFan), the laptop switches off. I already cleaned the vents and fan of dust, to no avail; they were actually quite clean anyway. Drivers and BIOS are up to date, and no crapware was ever installed on this machine. I don't know how to diagnose the problem. Could it be that the temperature sensors send wrong information, so the fan doesn't react? But then I believe the computer wouldn't detect the overheating and shut down either... Is there a way I can pinpoint the problem, maybe some low-level diagnostic tool to check the functionality of the sensors and fan? The warranty is already over, so any suggestion would be welcome. Thanks!

  • Mac OS X printing to CUPS - More intuitive authentication failure?

    - by Moduspwnens
    We have a network-wide CUPS server that offers authenticated printer access to all our campus users. We've been pretty disappointed with the way Mac clients handle bad printing authentication, though. In any other authentication dialog, when a user types in a bad username or password, the window shakes briefly, allowing the user to re-enter. With printers, this isn't the case. It'll happily accept (and even save to the keychain, if specified) bad credentials. The authentication dialog is dismissed, and the user then has to deal with the print jobs showing up as "On hold (authentication required)". To get their job printed, they need to select it in the printer's queue, click "Resume", then re-enter appropriate credentials. Is there a way to get failed printing authentication to work more intuitively for Mac OS X clients? We're trying to support a BYOD environment, but our end users have been really confused by this. It's made even worse by the way it pre-populates the user's full login name (e.g. "Smith, John"), which tends to make them think to use their local machine passwords.
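
     One server-side knob that may help (an assumption to test, not a guaranteed fix): marking the queue as requiring credentials up front, so clients prompt before the job is accepted instead of parking it on hold afterwards:

         # on the CUPS server, per print queue
         lpadmin -p <queue-name> -o auth-info-required=username,password

     Client behavior can still vary across OS X releases, so it's worth validating against a single BYOD machine before rolling it out.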

  • Configure Windows firewall to prevent an application from listening on a specific port

    - by U-D13
     The issue: there are many applications fighting to listen on port 80 (Skype, TeamViewer et al.), and for many of them that isn't even essential, in the sense that you can have an httpd running and holding the HTTP port, and the other application won't even squeak about being unable to open it. What makes things worse, some of these apps offer no way to configure which ports they use (that's what you get for using proprietary software), so you can either add the app to the Windows Firewall exceptions (and accept the undesired port-opening behavior) or not (and risk losing most, if not all, of its functionality). Technically, it is not impossible for a firewall to deny an application an incoming port even when the application is in the exception list, and if this functionality is built into the Windows firewall somewhere, there should be a way to activate it. So what I want to know is: does such an option exist, and if it does, how do I activate it?
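
     A hedged sketch of what this could look like with the built-in tools (Vista/7-era advfirewall syntax; the program path is an example): block rules outrank allow rules, so a per-program block on the single port can coexist with the app's general exception:

         netsh advfirewall firewall add rule name="Block Skype on 80" ^
           dir=in action=block protocol=TCP localport=80 ^
           program="C:\Program Files\Skype\Phone\Skype.exe" enable=yes

     This only stops inbound connections from reaching the app on that port; it does not stop the app from binding the port locally, so a race with a local httpd is not fully solved this way.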

  • nginx not serving admin static files?

    - by toto_tico
     First, I want to clarify that this error is just for the admin static files; that is, my problem is specific to the static files that correspond to the Django admin. The rest of the static files work perfectly. Basically my problem is that for some reason I cannot access those admin static files through the nginx server, while everything works perfectly with Django's development server, and collectstatic is doing its job, i.e. it puts the files in the expected place in the static folder. The URLs are correct, but I cannot access the admin static files even directly. So, for example, I am able to access this URL (copying it into the browser): myserver.com:8080/static/css/base/base.css, but I am not able to access this one: myserver.com:8080/static/admin/css/admin.css. I also tried to copy the admin/ directory structure into other_admin_directory_name/; then I can access myserver.com:8080/static/other_admin_directory_name/css/admin.css. So I checked permissions, and everything is fine. I tried to change ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work, which is a mystery in itself that I am still exploring with no luck. Finally, and this seems to be an important clue: I tried to copy the admin/ directory structure into admin_and_then_any_suffix/; then I cannot access myserver.com:8080/static/admin_and_then_any_suffix/css/admin.css. So if the name of the directory starts with admin (for example administration or admin2), it doesn't work.

     Added thanks to sarnold's observation: the problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

         location /static/admin {
             alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
         }
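
     A hedged reading of that clue: an nginx prefix location matches raw character prefixes, not path segments, so location /static/admin also captures /static/admin_and_then_any_suffix/... and maps it onto the alias where nothing exists. Making the prefix explicit and keeping the trailing slashes symmetrical is one sketch of a fix (the alias path is the existing one and may itself need checking, e.g. newer Django keeps admin assets under contrib/admin/static/admin/ rather than media/):

         location /static/admin/ {
             alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
         }

     With the trailing slash on the location, /static/admin2/... no longer matches this block and falls through to the general /static/ handler.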

  • schedule backup and restore of SSAS 2008 database

    - by Manjot
     Hi, I can back up and restore databases on Microsoft SQL Server Analysis Services 2008 using the GUI, as in Backup SSAS. I want to schedule a backup and restore it to another server every night, so what I did is: I scripted out the backup and restore process from the GUI, created a new SQL Server Agent job in the database engine, added a "Run SSAS query" step, and copied the scripts into this step. But it fails. The scripts that the GUI produced look like:

         <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
           <Object>
             <DatabaseID>DB</DatabaseID>
           </Object>
           <File>C:\Backup\DB.abf</File>
           <AllowOverwrite>true</AllowOverwrite>
         </Backup>

         <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
           <File>\\server\C$\Backup\DB.abf</File>
           <DatabaseName>DB</DatabaseName>
           <AllowOverwrite>true</AllowOverwrite>
         </Restore>

     Any help please?
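
     A couple of hedged observations (assumptions about how SQL Agent SSAS steps behave, not a verified diagnosis): Backup and Restore are two separate XMLA commands and likely need two separate job steps; the step type must accept XMLA DDL rather than MDX; and the Restore has to run against the destination Analysis Services instance, not the source. If the Agent step type keeps fighting you, a PowerShell job step invoking the XMLA files is one workaround sketch:

         # assumes the Analysis Services cmdlets are installed; server and file names are examples
         Invoke-ASCmd -Server "SourceServer" -InputFile "C:\Jobs\backup_db.xmla"
         Invoke-ASCmd -Server "DestServer" -InputFile "C:\Jobs\restore_db.xmla"

     with each .xmla file holding exactly one of the two commands above.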

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS [migrated]

    - by Sylvain Huard
     I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a fresh install: all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (call them A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everything is connected over wired RJ-45 Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see that machine's name on the network, and its name no longer appears in the machine list in the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by name, but I can ping it by address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me Samba itself should be fine, and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link doesn't have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of avahi and mDNS. Those packages are running; I checked multiple config files for samba, avahi, and mdns to see whether they match the examples on the web and the working B-Ubuntu machine, and they are the same. I restarted the samba and avahi services multiple times, removed the firewall to make sure it doesn't block the hostname broadcast, and rebooted multiple times to make sure the updates I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it could be, or where to look next?
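
     A hedged place to look (an assumption about the mechanism, worth verifying): Windows-style name browsing on a LAN usually comes from Samba's nmbd (NetBIOS) rather than avahi/mDNS, so it's worth confirming nmbd survived the upgrade. Some quick checks, from A-Ubuntu and from another machine:

         # on A-Ubuntu: is nmbd actually running? (upstart on 12.04)
         service nmbd status

         # from B-Ubuntu: does NetBIOS name resolution work?
         nmblookup A-Ubuntu

         # is A-Ubuntu announcing over mDNS at all?
         avahi-browse -art | grep -i a-ubuntu

     If nmblookup fails while ping by IP works, the broadcast side (nmbd, or the netbios name / name resolve order lines in /etc/samba/smb.conf) is the next thing to inspect.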

  • Search inside of text files

    - by Matt
     So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files. Essentially these are just text files kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or maybe do I have the indexing configured wrong? I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else, as long as I can find a way to get the job done. Paid solutions are fine as well. Thanks everyone in advance.
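
     For what it's worth, a sketch using the built-in findstr (no indexing involved; the path and address below are placeholders):

         cd /d D:\MerakMail\Logs
         findstr /s /i /m "user@example.com" *.log *.tmp

     /s recurses subfolders, /i ignores case, and /m prints only the names of matching files. Windows Explorer's "Containing text" search typically skips extensions it doesn't index content for, such as .log and .tmp, which is consistent with the behavior described.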

  • How to transfer files via infrared on Linux?

    - by arielnmz
     I know this technology is way too old, but I've got some files inside a very old cellphone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output):

         Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter

     I've tried to send the file over MMS and even email (the phone lacks Bluetooth, not to mention USB), but this cellphone's firmware doesn't let me attach the files. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!). I found a package called irda-utils, but it seems to contain only two executables: irdaping and irdadump. I think the dump utility might do the job (as far as I can see it's a kind of tcpdump for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for?

     EDIT: While reading through the Linux Infrared HOWTO I found the OpenObex project, which may be what I'm looking for...
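
     A hedged sketch of the OBEX route (package names from Debian-era repos; whether this phone speaks OBEX over IrDA at all is an assumption):

         # irda0 should appear once the MA-620 driver binds; then:
         sudo apt-get install irda-utils obexftp
         irdadump                  # sanity check: the phone should show up when in range
         obexftp --irda --list     # browse the phone's folders over IrDA
         obexftp --irda --get <filename-on-phone>

     If the phone can only push files rather than serve them, an OBEX push listener such as obexpushd would be the receiving side instead.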

  • vagrant and puppet security for ssl certificates

    - by Sirex
     I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant test machines that process the same node definition as the real production machines? I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one match it. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it, and re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine! One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and have a provisioning step move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with Vagrant and Puppet SSL certificates for development or testing clones of production machines?
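
     For reference, the "known cert" workaround sketched above might look like this in the Vagrantfile (a shell provisioner run before the puppet one; paths are illustrative, and keeping the key material out of the shared repo is exactly the unsolved part):

         config.vm.provision :shell, :inline => <<-SCRIPT
           mkdir -p /var/lib/puppet/ssl/certs /var/lib/puppet/ssl/private_keys
           cp /vagrant/secrets/sql.domain.com.cert.pem /var/lib/puppet/ssl/certs/
           cp /vagrant/secrets/sql.domain.com.key.pem /var/lib/puppet/ssl/private_keys/
         SCRIPT

     The cleaner direction is probably policy-based autosigning, or a throwaway per-developer CA, so that production certs never leave the puppet master at all.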

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
     I have two dedicated servers to choose from, depending on which one would do the better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.

     Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
     Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz

     I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server, mixed and matched from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, and Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on it. The issue is that the discussions I read do not clearly state whether they mean a virtual server running inside a host OS (i.e. VMware Player running on Windows with the game server in a VM) or on a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be, if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running inside an OS.

  • MySQL fails to start

    - by John Naegle
     I'm running an Ubuntu 12.04 LTS virtual machine. Last week the VM stopped unexpectedly, and now MySQL will not start on it. These two events may or may not be related. When I try to connect:

         $ mysql
         ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

     Then:

         $ sudo service mysql start
         start: Job failed to start

     And:

         $ dmesg
         [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
         [ 1838.358656] init: mysql main process (18477) terminated with status 1
         [ 1838.358695] init: mysql main process ended, respawning
         [ 1839.269303] init: mysql post-start process (18478) terminated with status 1

     And:

         $ service mysql status
         mysql stop/waiting

     I think this means mysql is crashing when it starts:

         $ sudo mysqld start
         130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
         InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
         InnoDB: We intentionally generate a memory trap.
         InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
         InnoDB: If you get repeated assertion failures or crashes, even
         InnoDB: immediately after the mysqld startup, there may be
         InnoDB: corruption in the InnoDB tablespace. Please refer to
         InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
         InnoDB: about forcing recovery.
         02:51:24 UTC - mysqld got signal 6 ;

     Per the manual, I went to the data directory (/var/lib/mysql) and ran:

         myisamchk --silent --force */*.MYI

     Then:

         $ sudo mysqld
         ...
         InnoDB: Your database may be corrupt or you may have copied the InnoDB
         InnoDB: tablespace but not the InnoDB log files. See
         InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
         InnoDB: for more information.
         ...

     Is my database corrupt? What can I do to recover? Re-install MySQL? Something less drastic? I'm fine with losing the database, I just want a working system.
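
     Since losing the data is acceptable, the escape hatch the log itself points at is worth sketching (the documented InnoDB forced-recovery procedure; back up /var/lib/mysql first):

         # /etc/mysql/my.cnf, under [mysqld] -- start at 1, raise only if it still crashes
         innodb_force_recovery = 1

         # then:
         sudo service mysql start
         mysqldump --all-databases > /root/all.sql   # salvage whatever is readable

     After the dump, removing the ibdata*/ib_logfile* files, clearing innodb_force_recovery, and reimporting (or simply reinstalling mysql-server and reimporting) gives a clean system. Per the linked manual page, values above 4 risk further damage.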

  • How to compress PDFs from Word 2007?

    - by chobo2
     Hi, I am trying to send out my cover letter and resume, but apparently the message is too big to send through craigslist (my computer says the total size is 500KB) because of their 600KB limit (so small; it should be at least a meg):

         Hi there. You recently tried to email Some job Email, an anonymous craigslist address.
         However, your message was too big to be sent through our system.
         Craigslist has a 600KB limit on the messages we'll send.
         Please reduce the size of your mail and try again.
         Thanks for using craigslist.

     When I convert my Word 2007 (.docx) files to PDF they become huge: they go from 32KB to 320KB. So is there a way I can either get around the craigslist limit or compress my PDFs a bit to make it happy? I don't want to send zips and such, since the person who gets the mail might not know what to do with them. I'd rather not send .docx either, since I'm not sure the recipient will have Office 2007 or the compatibility pack installed, and I'd rather just send PDFs anyway (some places require them). Thanks.
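
     Two hedged options: in Word 2007's Save as PDF dialog, choosing "Minimum size (publishing online)" under Optimize for usually shrinks the output considerably; or post-process the PDF with Ghostscript (a sketch; the Windows binary name varies by install):

         gswin32c -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook ^
           -dNOPAUSE -dBATCH -dQUIET -sOutputFile=resume-small.pdf resume.pdf

     /ebook targets roughly 150 dpi images, and /screen compresses harder if the result is still too big. Since a resume is mostly text, it's also worth checking whether a full-page logo or photo is what's inflating the file.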

  • Inter-VLAN Malicious Code Scanning

    - by Jackthedog
     I am trying to find an inbuilt solution on a Cisco Catalyst 3750X switch to scan all traffic routed from one VLAN to another for malicious code. The situation is that we currently have a development environment which is being redesigned to use the 3750X switches for server and workstation connectivity as well as inter-VLAN routing. We also have another system that is responsible for taking the builds created in the development environment and imaging various HDDs. Because these are two separate systems, we have a workplace requirement to anti-virus scan any data transferred between them. This is done by copying the data from the originating system to an external USB HDD, scanning it on a standalone workstation, and then copying the data onto the receiving system. As you can imagine, this is extremely tedious and impractical most of the time... (I don't make the rules.) Anyway, with this redesign going on, we would like to join the imaging system to the network infrastructure of the development system, keeping separation by the use of VLANs and restricting traffic with ACLs. As we still have the requirement to scan all traffic, I would like to configure some sort of malicious code scanning whenever traffic is routed between these VLANs. I am aware I could install a separate in-line IPS/IDS device, however both systems will use multiple ports on the switch (obviously), and we won't be able to put a device on each port. I would prefer not to add additional hardware if the 3750X switch is capable of doing the job. Is anyone aware of any Cisco solution that I could use here, ideally one that can be incorporated into the 3750X switch itself? Thanks in advance.

  • Microsoft Web Farm Framework 502 error

    - by hermiod
     My apologies if this is a really stupid question, but I have installed and set up the new Microsoft Web Farm Framework in a UAT environment with a controller machine, a primary server, and a secondary server. I have installed one of our in-house applications onto the primary server and it has been successfully provisioned across to the secondary. I have tried accessing the server via the URL http://mis-uat-002/testfarm/reviewer, where mis-uat-002 is the name of the farm controller, testfarm is the name of the farm, and reviewer is the name of the application. When using this URL I get a "502 - Web server received an invalid response while acting as a gateway or proxy server." error. If I access the application directly via http://mis-uat-003/reviewer (mis-uat-003 is the primary farm server, reviewer is the application) then it works no problem. Is there some additional setup that needs to be done in order for the farm controller to do its job? It should be noted that this is a UAT environment, so it is not on a domain of any sort. However, if this was successful and the WFF came out with v1.0 fairly soon, we would be looking to run this on a Windows 2008 domain.

  • Best practices for settings for Oracle database creation

    - by Gary
     When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a requirement of a specific application rather than generally applicable isn't really useful. Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you REVOKE a GRANT that is installed by default; that may be version dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything, or where you disagree (and why).

     Database parameters:

         Auditing: AUDIT_TRAIL = DB, AUDIT_SYS_OPERATIONS = YES
         DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING both FULL
         GLOBAL_NAMES = true
         OPEN_LINKS = 0 (did not expect database links to be used in this environment)
         Character set: AL32UTF8

     Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

     Developer logins:

         CREATE PROFILE profile_dev LIMIT
           FAILED_LOGIN_ATTEMPTS 8
           PASSWORD_LIFE_TIME 32
           PASSWORD_REUSE_TIME 366
           PASSWORD_REUSE_MAX 12
           PASSWORD_LOCK_TIME 6
           PASSWORD_GRACE_TIME 8
           PASSWORD_VERIFY_FUNCTION verify_function_11g
           SESSIONS_PER_USER unlimited
           CPU_PER_SESSION unlimited
           CPU_PER_CALL unlimited
           PRIVATE_SGA unlimited
           CONNECT_TIME 1080
           IDLE_TIME 180
           LOGICAL_READS_PER_SESSION unlimited
           LOGICAL_READS_PER_CALL unlimited;

     Application login:

         CREATE PROFILE profile_app LIMIT
           FAILED_LOGIN_ATTEMPTS 3
           PASSWORD_LIFE_TIME 999
           PASSWORD_REUSE_TIME 999
           PASSWORD_REUSE_MAX 1
           PASSWORD_LOCK_TIME 999
           PASSWORD_GRACE_TIME 999
           PASSWORD_VERIFY_FUNCTION verify_function_11g
           SESSIONS_PER_USER unlimited
           CPU_PER_SESSION unlimited
           CPU_PER_CALL unlimited
           PRIVATE_SGA unlimited
           CONNECT_TIME unlimited
           IDLE_TIME unlimited
           LOGICAL_READS_PER_SESSION unlimited
           LOGICAL_READS_PER_CALL unlimited;

     Privileges for a standard schema-owner account:

         CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE,
         CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM,
         CREATE TRIGGER

  • Cisco Multi-DMZ firewall

    - by BParker
     I need to find a firewall that will give me 1 LAN port and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run some testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are entirely isolated and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague; I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit which can do the same job more reliably. These test rigs are used all over the place and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks.

  • Software to clean up photos of whiteboards and documents?

    - by Norman Ramsey
     I take a lot of photos of whiteboards, blackboards, and so on for teaching purposes (examples online through May 2010). I'm interested in cleaning them up for archival purposes, preferably using Linux. Commercial products ClearBoard and PhotoNote are priced a little aggressively for my purposes, plus my students would like to have this capability too. Does anyone know of any good, open source software for converting photographs to images with just a few colors, eliminating perspective distortion, removing unwanted junk from around the edges of an image, or anything like that? I'm imagining that I start out with a picture of my whiteboard using red and black markers, and I end up with a three-color image using just white, red, and black. Or I photograph a laser-printed document and end up with a clean black-and-white image. I have tried standard tools that reduce the number of colors in an image, and they do a terrible job, probably because they are trying to reproduce the uneven illumination of the original image. Command-line Linux tools would be ideal.
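
     A hedged ImageMagick starting point (the divide-by-blur trick flattens uneven illumination before color reduction; the numbers are per-photo tuning, not fixed values):

         # normalize lighting, then snap to a few colors
         convert board.jpg \( +clone -blur 0x40 \) -compose Divide_Src -composite \
           -normalize -level 20%,95% -colors 4 board-clean.png

     Perspective can be corrected with convert's -distort Perspective given four corner coordinates, and plain cropping handles edge junk. ScanTailor (open source) also automates much of this pipeline, though it's aimed at document scans rather than whiteboards.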

  • How to let MSN or Yahoo Messenger set you to be Invisible or Offline when you are idle for an hour?

    - by Jian Lin
     The short question is: how do we get MSN or Yahoo Messenger to set us to Invisible or Offline when we are idle for half an hour or an hour? The reason: if I am on 24 hours a day, some people see me as weird, and some see my value as low, because I am always there. There are ways to set myself to "Away" or "Busy" after 10 minutes, but there seems to be no way to set myself to Invisible or Offline after an hour. As a software developer, I am used to leaving the computer on 24 hours a day (for example, to check email for urgent fixes and push them to the web server). We usually don't turn the computer off even when we sleep, because we may check on it before falling asleep, or wake up in the morning and immediately need to verify everything is okay. So my MSN and Yahoo Messenger are always on, 24 hours a day, and some people have started to ask me why I am always there (even when they see me as Away or Busy, their feeling is that I am always there). What's more, since I am always there, my value actually drops in their eyes: hard to get = high value, always there = low value. Some people assume I have nothing much to do, or wonder what I am doing in front of the computer so much. Since it is my job, and I need to read email once in a while, I am in fact in front of the computer more than some other people, maybe 10 hours a day, but far from 24. Is there an easy and automatic solution to this?

  • Parallel Environment (PE) on Sun Grid Engine (6.2u5) won't run jobs: "only offers 0 slots"

    - by Peter Van Heusden
     I have Sun Grid Engine (version 6.2u5) set up on an Ubuntu 10.10 server with 8 cores. In order to be able to reserve multiple slots, I have a parallel environment (PE) set up like this:

         pe_name            serial
         slots              999
         user_lists         NONE
         xuser_lists        NONE
         start_proc_args    /bin/true
         stop_proc_args     /bin/true
         allocation_rule    $pe_slots
         control_slaves     FALSE
         job_is_first_task  TRUE
         urgency_slots      min
         accounting_summary FALSE

     This is associated with the all.q on the server in question (let's call the server A). However, when I submit a job that uses 4 threads with e.g. qsub -q all.q@A -pe serial 4 mycmd.sh, it never gets scheduled, and I get the following reasoning from qstat:

         cannot run in PE "serial" because it only offers 0 slots

     Why is SGE saying "serial" only offers 0 slots, when there are 8 slots available on the server I specified (server A)? The queue in question is configured thus (server names changed):

         qname                 all.q
         hostlist              @allhosts
         seq_no                0
         load_thresholds       np_load_avg=1.75
         suspend_thresholds    NONE
         nsuspend              1
         suspend_interval      00:05:00
         priority              0
         min_cpu_interval      00:05:00
         processors            UNDEFINED
         qtype                 BATCH INTERACTIVE
         ckpt_list             NONE
         pe_list               make orte serial
         rerun                 FALSE
         slots                 1,[D=32],[C=8],[B=30],[A=8]
         tmpdir                /tmp
         shell                 /bin/sh
         prolog                NONE
         epilog                NONE
         shell_start_mode      posix_compliant
         starter_method        NONE
         suspend_method        NONE
         resume_method         NONE
         terminate_method      NONE
         notify                00:00:60
         owner_list            NONE
         user_lists            NONE
         xuser_lists           NONE
         subordinate_list      NONE
         complex_values        NONE
         projects              NONE
         xprojects             NONE
         calendar              NONE
         initial_state         default
         s_rt                  INFINITY
         h_rt                  08:00:00
         s_cpu                 INFINITY
         h_cpu                 INFINITY
         s_fsize               INFINITY
         h_fsize               INFINITY
         s_data                INFINITY
         h_data                INFINITY
         s_stack               INFINITY
         h_stack               INFINITY
         s_core                INFINITY
         h_core                INFINITY
         s_rss                 INFINITY
         h_rss                 INFINITY
         s_vmem                INFINITY
         h_vmem                INFINITY,[A=30g],[B=5g]
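
     A hedged diagnostic path (standard SGE tooling; the hypothesis about per-instance slots is an assumption to verify): ask the scheduler which queue instances can actually host the PE, and confirm what is attached to all.q@A:

         qstat -f -pe serial          # which queue instances offer the PE, and their slot state
         qconf -sp serial             # the PE definition as the scheduler sees it
         qconf -sq all.q | grep -E 'slots|pe_list'
         qalter -w v <jobid>          # dry-run verification of why a pending job won't place

     If all.q@A shows its 8 slots already in use, or the host is in an alarm/disabled state (visible as 'a'/'d' flags in qstat -f), then "only offers 0 slots" refers to free slots at that instant rather than configured slots.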
