Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/] and proxied with an Apache HTTP server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what the most sensible way to do this is. My setup has evolved through two scenarios, and I'm considering a third for maximal separation of the distinct webapps.
    [1] One Tomcat instance, one Cocoon instance, multiple webapps:
    -tomcat
     |_ webapps
        |_ webapp1
        |_ webapp2
        |_ webapp[n]
        |_ WEB-INF (with Cocoon libs)
    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, it poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.
    [2] One Tomcat instance, each webapp in its own dedicated Cocoon environment:
    -tomcat
     |_ webapps
        |_ webapp1
        |  |_ WEB-INF (with Cocoon libs)
        |_ webapp2
        |  |_ WEB-INF (with Cocoon libs)
        |_ webapp[n]
           |_ WEB-INF (with Cocoon libs)
    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory this works fine: all webapps can be updated independently. However, it soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing the memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.
    [3] Multiple Tomcat instances, each with a single webapp in its own dedicated Cocoon environment:
    -tomcat
     |_ webapps
        |_ webapp1
           |_ WEB-INF (with Cocoon libs)
    -tomcat
     |_ webapps
        |_ webapp2
           |_ WEB-INF (with Cocoon libs)
    -tomcat
     |_ webapps
        |_ webapp[n]
           |_ WEB-INF (with Cocoon libs)
    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache HTTP frontend, as it requires the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added, and there seems to be no way to reload workers.properties without restarting the entire Apache HTTP server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
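    A rough sketch of what scenario [3] could look like on the Tomcat side, assuming the shared distribution lives in /opt/tomcat and each webapp gets its own base directory (paths and port numbers here are illustrative, not taken from the setup above):
    # Create a dedicated CATALINA_BASE for one webapp (repeat per webapp)
    mkdir -p /srv/tomcat-webapp1/{conf,logs,temp,webapps,work}
    cp /opt/tomcat/conf/* /srv/tomcat-webapp1/conf/
    # Edit /srv/tomcat-webapp1/conf/server.xml so the shutdown, HTTP and AJP
    # ports (e.g. 8006 / 8081 / 8010) do not clash with the other instances
    # Start the instance against the shared binaries
    CATALINA_HOME=/opt/tomcat CATALINA_BASE=/srv/tomcat-webapp1 /opt/tomcat/bin/startup.sh
    On the Apache side, a mod_jk workers.properties fragment would then list one AJP worker per instance, for example:
    worker.list=webapp1,webapp2
    worker.webapp1.type=ajp13
    worker.webapp1.host=localhost
    worker.webapp1.port=8010
    worker.webapp2.type=ajp13
    worker.webapp2.host=localhost
    worker.webapp2.port=8011
    Each Tomcat instance gets its own JVM (and so its own PermGen), which is what makes this attractive memory-wise; adding an instance still means touching the Apache configuration, though a graceful restart, rather than a full stop/start, is normally enough to pick the change up.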

    Read the article

  • How to make MAMP PRO / XAMPP secure enough to serve as production webserver? Is it possible?

    - by Andrei
    Hi, my task is to set up a MAMP webserver for our website in the easiest way possible, so it can be managed by my colleagues who have no experience in server administration. MAMP PRO is an excellent solution, but some people advise against using it for serving external requests. Could you explain why it is bad (in detail if possible) and how to make it secure enough to be a full-scale and not only local webserver? Is there a better solution? Update: There is a discussion on the MAMP website. The XAMPP developers say that one can make their product secure: "The default configuration is not good from a security point of view and it's not secure enough for a production environment - please don't use XAMPP in such environment. Since LAMPP 0.9.5 you can make your XAMPP installation secure by calling »/opt/lampp/lampp security«." Could you comment on it?
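    As a minimal sketch of the hardening step the XAMPP documentation refers to (on a Linux XAMPP/LAMPP install; the netstat check afterwards is just a suggested sanity test):
    # Set passwords for the services XAMPP exposes (MySQL, phpMyAdmin, demo pages, FTP)
    sudo /opt/lampp/lampp security
    # Afterwards, check what is still listening on external interfaces
    sudo netstat -tlnp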

    Read the article

  • Cross-submission robots.txt for multiple domains on single host

    - by sidd.darko
    We are running a site with multiple languages hosted in a single environment on IIS7. For example:
    oursite.com - English
    oursite.de - German
    oursite.es - Spanish
    This is a single-host environment: all of these sites are in the same application space on the same physical machine. I need to do cross-submission of sitemaps via robots.txt. Looking at the sitemaps.org guidelines suggests this is possible, but the example indicates different physical machines. Will the following entries in oursite.com/robots.txt work?
    http://www.oursite.com/sitemap-oursite-de.xml
    http://www.oursite.com/sitemap-oursite-es.xml
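    Per the sitemaps.org cross-submission rules, the Sitemap directive goes in the robots.txt of the site the sitemap describes, and it may point at a URL on another host. A sketch of what the German site's file could look like (file names taken from the question; the directive syntax is standard robots.txt):
    # http://www.oursite.de/robots.txt
    User-agent: *
    Disallow:
    Sitemap: http://www.oursite.com/sitemap-oursite-de.xml
    The .es site would carry the matching line for sitemap-oursite-es.xml, while oursite.com's own robots.txt only needs to reference its own sitemap.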

    Read the article

  • I need to duplicate software packages installed on one HPUX system to a test system

    - by oneodd1
    I am very inexperienced with HPUX and need to duplicate a production server for a test environment. I have 11.11 on the prod server and have completed the base install on the test server. What I need is a way to add the installed packages from the prod server to the test server. Unfortunately I haven't the foggiest idea how to do this. I've thought of using ignite backup and restore but don't have matching tape types between the two. The other thought was using swlist to gather the installed packages and then going to the website to download and install on the test environment. Has anyone successfully done something like this? Pointers?
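    One tape-free approach is to use HP-UX's Software Distributor (swlist/swinstall) to reproduce the product list on the test box. A hedged sketch of that route, assuming the original install depot (or media) is still reachable and with the depot path as a placeholder:
    # On the production server: capture the list of installed products
    swlist -l product > /tmp/prod_products.txt
    # On the test server: install the same products from the install depot/media
    # (the awk parsing of swlist output will need checking against your format)
    swinstall -s depot_server:/var/depot/11.11 $(awk '!/^#/ {print $1}' /tmp/prod_products.txt)
    If a byte-for-byte clone is preferable, Ignite-UX can also take a network recovery image (make_net_recovery) instead of tape, which sidesteps the mismatched tape drives.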

    Read the article

  • Missing "log on to" field in Windows 2008 password change dialog

    - by Dmitri Chubarov
    I have to set up an embedded standalone CIFS server outside of a domain environment. I shall refer to the server as storage_gateway. The default password setting for the Administrator account is "Password must be changed", and until the Administrator password is changed the other local accounts are disabled. While the server is on the default password, connecting to a share fails with an NT_STATUS_PASSWORD_MUST_CHANGE error:
    $ smbclient -L //storage_gateway -U Administrator
    session setup failed: NT_STATUS_PASSWORD_MUST_CHANGE
    I have a mixed Linux/Windows environment and prefer smbclient for diagnostics. To change the password on Windows Server 2003 I could press CTRL-ALT-DEL to get to the Windows Security window and change the password for the remote server by specifying the server name or IP address (xx.xx.xx.xx) in the system password change dialog. (The name root is just an example; I have edited the dialogs to translate them.) However, in Windows 2008 the corresponding dialog lacks the "Log on to" field. Is there a way to change a remote password in Windows Server 2008 / Windows 7?
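    Since smbclient is already being used for diagnostics, one thing worth trying from the same Linux box is changing the remote password with Samba's smbpasswd; a sketch, using the server name from the question:
    # Change the Administrator password on the remote CIFS server
    smbpasswd -r storage_gateway -U Administrator
    # You are prompted for the old password, then for the new one twice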

    Read the article

  • Scripting a Windows 2008 Cluster from Windows 2003

    - by glancep
    Our current environment is all Windows 2003. When we migrate a new version of our service to the cluster, we first stop the service with a command like:
    cluster.exe <clusterName> resource "<serviceName>" /offline
    We run a similar command after the migration to bring the service back online. Now we are upgrading our environment to new Windows 2008 servers; however, our build/migrate machine will remain Windows 2003. When issuing the same command from Windows 2003 against Windows 2008, we get:
    System error 1722 has occurred (0x000006ba). The RPC server is unavailable.
    We need to be able to remotely administer a Windows 2008 cluster from a Windows 2003 server in an automated fashion (such as with the command-line cluster.exe utility). Is this possible? Thanks, Gideon
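    The Windows 2003 cluster tools generally cannot talk to a 2008 failover cluster, so one workaround is to have the 2003 build machine run cluster.exe on one of the 2008 nodes itself, for example with Sysinternals PsExec (node, account and resource names below are placeholders):
    REM Take the clustered resource offline by running cluster.exe on a 2008 node
    psexec \\clusternode1 -u DOMAIN\clusteradmin cluster resource "serviceName" /offline
    REM ...migrate... then bring it back online the same way
    psexec \\clusternode1 -u DOMAIN\clusteradmin cluster resource "serviceName" /online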

    Read the article

  • Windows Deployment Services

    - by timbrigham
    I have a slightly advanced Windows Deployment Services setup. My router hands out DHCP addresses, including the following config:
    ip dhcp pool Servers_100
     network 192.168.100.0 255.255.255.0
     bootfile boot\\x86\\pxelinux.0
     next-server 192.168.100.50
     default-router 192.168.100.1
     dns-server 192.168.100.80 192.168.100.81
    This works perfectly for other subnets - I have a couple of screens in my pxelinux menu that let me select my various Linux installers or enter the Windows preboot environment. For some reason, on this subnet I'm only receiving the default bootfile, which opens the Windows preboot environment. Any idea why?

    Read the article

  • How can I determine a cmd.exe's parent process?

    - by René Nyffenegger
    Sometimes I find myself in a cmd.exe environment that was itself started by another cmd.exe or by another console-based application. Working in such an environment, I'd like to know what happens if I type exit: will the cmd.exe window disappear, or will it go back to the creating cmd.exe or application? This, of course, is because sometimes as I work in cmd.exe I forget how I started it. So, is there a way to find out the parent process (if that is the correct term for what I mean) of a cmd.exe from within cmd.exe?
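    One way to answer this from inside the session is WMI via wmic, which can show each cmd.exe together with its parent process id; a sketch:
    REM List every cmd.exe with its own PID and its parent's PID
    wmic process where "name='cmd.exe'" get ProcessId,ParentProcessId,CommandLine
    REM Then look up what the parent PID actually is (replace 1234)
    wmic process where "ProcessId=1234" get Name,ExecutablePath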

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I’m having some trouble with profiles and would like to reach out for some help. I’ve tried to do some research to help myself along, but I’m not making much progress on my own. I’ve pretty much taken over the sysadmin duties for my small lab; I don’t have much experience to justify it besides being the only one with the time and dedication to go at it (the environment was in a state of disrepair).
    The network and domain I look after are extremely small by most standards, about 10 users at a time. Their activity on the network is pretty intensive, and we work with fairly large files. None of the network is online, which is nice at the moment because it spares me another headache.
    On to my profile problem: I have set up roaming profiles for the users on the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice. I also don’t want the users to have to wait a long time if they have a bloated profile.
    I’ve finally got a build working using MDT. We have Mac Pros, and it wasn’t fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we need them. I then used MDT to sysprep and capture an image of the reference computer. Using the reference image I can push out my images to the rest of the desktops in the environment.
    The issue I’m having is when we join the computer to the domain. The user can log on and operate fine on the computer, but I’d like more. When the user is logged on with their domain user name, they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on with their domain user name and see the icons and desktop environment as I had set them up on the reference computer. I’m not sure if this is possible, or if it is something simple that I’m missing, but any help would be greatly appreciated!
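    One commonly used way to have domain users pick up the customised desktop from a reference build is sysprep's CopyProfile setting, which copies the reference profile over the Default profile that new domain users inherit. A minimal unattend.xml fragment as a sketch (the component and pass names are the standard ones, but treat the integration with your MDT task sequence as something to verify):
    <settings pass="specialize">
      <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
                 publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
        <!-- Copy the customised local profile to the Default profile -->
        <CopyProfile>true</CopyProfile>
      </component>
    </settings>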

    Read the article

  • Nested RDP and ILO/VMWare console sessions, latency and keystroke repetition

    - by ewwhite
    I'm working on a remote server installation entirely through ILO. Due to the software application and environment, my access is restricted to a Windows server that I must access through RDP. Going from that system to the target server is accomplished via HP ILO2 or ILO3. I'm trying to run a CentOS installation in an environment where I can't use a kickstart. I'm doing this via text mode, but the keystrokes are repeating randomly and it's difficult to select the proper installation options. For example: ks=http://all.yourbase.org/kickstart/ks.cfg ends up looking like: ks====httttttp://allll..yourbaseee.....org/kicksstart/ks.cccfg I'm doing this using Microsoft's native RDP client (on Mac and Windows). I've also noticed this before when running installations or doing remote work in nested sessions. Same for typing into a VMWare console in some cases. Is there a nice fix for this, or is it simply a function of the protocol(s)?

    Read the article

  • Windows Server firewall asking for advice

    - by George2
    Hello everyone, I have a Windows Server 2003/2008 machine, and I have deployed an application on it. I want to put this machine in a sandbox environment, meaning I want it to be able to access only the proxy/gateway and its privately used SQL Server database server, and I want to prevent network access from this machine to other machines in the lab server room. Any easy solutions? BTW, my current environment is this: I have a server running some beta software in a lab server room. It connects to the internet through a proxy/gateway. Since the software is beta, I want to reduce the risk of it being hacked from the internet and used to attack my other servers in the same lab. Thanks in advance, George
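    On the Windows Server 2008 machine (the 2003 firewall is far more limited), one way to approximate this sandbox with the built-in firewall is to block outbound traffic by default and whitelist only the gateway and the SQL Server; a sketch with placeholder addresses and ports:
    REM Block everything by default, inbound and outbound
    netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
    REM Allow outbound traffic to the proxy/gateway (placeholder IP and port)
    netsh advfirewall firewall add rule name="Allow proxy" dir=out action=allow remoteip=10.0.0.1 protocol=TCP remoteport=8080
    REM Allow outbound traffic to the private SQL Server (default port 1433)
    netsh advfirewall firewall add rule name="Allow SQL" dir=out action=allow remoteip=10.0.0.5 protocol=TCP remoteport=1433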

    Read the article

  • How to build the rpm package with SHA-256 checksum for files?

    - by larrycai
    In a standalone RHEL 6.4 rpm build environment, the rpm packages are generated with SHA-256 file checksums, as shown by rpm -qp --dump xxx.rpm:
    [user@redhat64 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm
    ..
    /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1398338016 d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 0100750 user abcc 0
    ..
    However, if the package is built in a Docker environment (still RHEL 6.4), the checksum is MD5. UPDATE: the Docker host is Ubuntu 14.04, and RHEL 6.4 is the container inside.
    [user@c1cbdf51d189 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm
    ..
    /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1401952578 f229759944ba77c3c8ba2982c55bbe70 0100750 user abcc 0
    ..
    If I check the real file, the file is the same:
    [user@c1cbdf51d189 1.0.1-1]$ sha256sum bin/start.sh
    d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 bin/start.sh
    [user@c1cbdf51d189 1.0.1-1]$ md5sum bin/start.sh
    f229759944ba77c3c8ba2982c55bbe70 bin/start.sh
    How do I configure rpmbuild so that the generated rpm file uses SHA-256?
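    The per-file digest algorithm is controlled by rpm macros, so it is worth checking what they resolve to inside the container and, if necessary, pinning them to SHA-256 (algorithm id 8); a sketch, assuming rpm 4.6 or later:
    # See which digest algorithm rpmbuild would currently use
    rpm --eval '%{_binary_filedigest_algorithm}'
    # Force SHA-256 (id 8) for source and binary packages
    cat >> ~/.rpmmacros <<'EOF'
    %_source_filedigest_algorithm 8
    %_binary_filedigest_algorithm 8
    EOF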

    Read the article

  • Changing Windows public folder in batch script.

    - by Angus
    I've installed Steam on an external hard drive so I can play games on different computers by just moving the drive around. Save games are often stored in My Documents or AppData, but I want them to move with the external hard drive, so I wrote a batch file that sets environment variables before starting Steam:
    setlocal
    set USERPROFILE=%EXTERNAL_LETTER%\Profile\Me
    set APPDATA=%USERPROFILE%\AppData\Roaming
    ...
    start %TARGETAPP%
    endlocal
    I'm not sure if this is the right way to do this on Windows, but it seems to work. However, one game saves its games in the Shared Documents folder. I've tried setting %PUBLIC% and %ALLUSERSPROFILE%, but that does not seem to affect where the game looks. Is it possible to make this one program use a different Shared Documents folder, either through environment variables or some other means? The change to Shared Documents should only affect the one program; I do not want it to be a permanent or system-wide change to Windows.

    Read the article

  • How to trigger cross-platform jobs between Windows and Unix

    - by andyb
    I have an application which has components on Windows and Unix. I need to run overnight jobs which initiate tasks/jobs on both environments. These need to be sequenced, so I cannot simply use cron / Task Scheduler. For each job there will be a controlling script on either Windows or Unix, but this script will need to initiate jobs on the other environment and detect when they are complete, along with a success/failure code. In the past I have achieved this using 'flag files' on a Samba share. This worked but required 'polling' behaviour on the receiving end, which I do not consider optimal. I would also prefer not to have to embed user credentials in scripts if possible.
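    One pattern that avoids both polling and embedded passwords is to run an SSH server on the Windows side (for example via Cygwin's OpenSSH - an assumption, not something already in place above) and let the controlling Unix script start the remote job synchronously with key-based authentication; ssh returns the remote command's exit status, so the sequencing falls out naturally. A sketch with placeholder host and script names:
    #!/bin/sh
    # Run the Windows-side job over SSH and capture its exit status
    ssh jobrunner@winhost 'cmd /c C:\jobs\nightly_load.bat'
    rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "Windows job succeeded, continuing with Unix steps"
        /opt/jobs/unix_postprocess.sh
    else
        echo "Windows job failed with exit code $rc" >&2
        exit "$rc"
    fi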

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
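    Two low-effort ways to get some PHP output to debug with (the script path below is a guess based on the URL in the error log, and the error_log location is arbitrary):
    # 1. Run the failing script through the same CGI binary by hand;
    #    parse errors and fatal errors are printed straight to the console
    sudo -u www-data php-cgi /var/www/~denton/customer-facing-portal/index.php
    # 2. Have PHP log errors to a file and watch it while reproducing the 500:
    #    in /etc/php5/cgi/php.ini set  log_errors = On  and
    #    error_log = /var/log/php_errors.log  then restart lighttpd
    sudo touch /var/log/php_errors.log && sudo chown www-data /var/log/php_errors.log
    sudo /etc/init.d/lighttpd restart
    tail -f /var/log/php_errors.log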

    Read the article

  • NFS failover WITHOUT DRBD?

    - by user439407
    So I am trying to set up a redundant NFS share in a cloud environment (all links internal, half-gig links), and I am looking into using Heartbeat for failover, but all the guides seem to be about combining DRBD and Heartbeat to create a robust environment. If need be I can do that, but since my content is almost completely static, I would like to avoid the extra overhead and complexity of DRBD if possible, while still being able to fail over if one of the NFS servers fails. Is it possible to use Heartbeat with NFS to achieve high availability without using DRBD to copy the blocks? I am not married to NFSv4, so if NFSv3 over UDP is necessary, that won't be a problem (only a very small number of clients will be connecting to the share). Any comments are appreciated.
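    Heartbeat itself does not care whether the data is replicated by DRBD or simply already present on both nodes, so for effectively static content a minimal v1-style configuration can just float an IP and the NFS service between two servers. A sketch (node names, the address, and the assumption that both nodes hold identical copies of the exported tree are all illustrative):
    # /etc/ha.d/haresources (identical on both nodes)
    # preferred-node   floating-IP     service started on the active node
    nfs1 192.168.0.100 nfs-kernel-server
    # /etc/ha.d/ha.cf (minimal sketch)
    node nfs1 nfs2
    bcast eth0
    keepalive 2
    deadtime 30
    auto_failback on
    The usual catch is NFS client state: clients mount the floating IP, and for NFSv3 the server-side state (e.g. /var/lib/nfs) either needs to be unimportant for the workload or kept consistent between nodes for failover to be seamless.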

    Read the article

  • Exchange 2003 resource scheduling with mixed client versions

    - by Daniel Lucas
    We run Exchange 2003, but have a mix of Outlook 2003/2007/2010 in the environment. We have three rooms that need to be configured as resources. Some observations we've made with resource scheduling/booking are:
    - Outlook 2010 users have trouble with the native Exchange 2003 resource scheduling method and require direct booking to be configured via registry
    - Outlook 2007 users are unable to use direct booking (is this accurate?)
    - Outlook 2003 users can only use the native Exchange 2003 resource scheduling method (is this accurate?)
    - Direct booking cannot be combined with the auto-accept agent
    What is the correct way to setup resource scheduling in a mixed environment like this? Thanks, Daniel
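    For the Outlook 2010 observation, direct booking is switched on per client through a registry value; a sketch of how it could be pushed out (the 14.0 path is Outlook 2010's, and deploying it via GPO or a login script is an assumption):
    REM Enable direct booking of resource mailboxes in Outlook 2010
    reg add "HKCU\Software\Microsoft\Office\14.0\Outlook\Options\Calendar" /v EnableDirectBooking /t REG_DWORD /d 1 /f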

    Read the article

  • Managing Active Directory Group Membership with a Non-Administrator Account In Server 2008

    - by Laranostz
    I am running Server 2008 R2 in an Active Directory domain environment. I have created a group in Active Directory and have delegated management authority over that group to a user. I want this user to be able to add and remove accounts from that group as needed, so that they are exercising some measure of control without being given other authority. When the user attempts to open the Active Directory Users & Computers console, it prompts them for Administrator credentials. They are using Remote Desktop to access the server, because they do not have Windows 7, and firewall rules prevent using the Remote Management Kit. I do not want to give them any level of administrative rights beyond the minimum required to add/remove users from this group. There are two servers that 'talk' to each other in this isolated environment, a domain controller and a member server, both of which are only reachable through RDP. Any suggestions?
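    If the delegation on the group itself is in place, the user does not strictly need the ADUC console: the directory service command-line tools honour the same ACLs, so something like the following could run in the user's own (non-administrative) RDP session. The group and member DNs are placeholders:
    REM Add and remove a member of the delegated group as the delegated user
    dsmod group "CN=LabOperators,OU=Groups,DC=example,DC=com" -addmbr "CN=Jane Doe,OU=Staff,DC=example,DC=com"
    dsmod group "CN=LabOperators,OU=Groups,DC=example,DC=com" -rmmbr "CN=Jane Doe,OU=Staff,DC=example,DC=com"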

    Read the article

  • Ideas for SVN/SQL/PHP/Linux Dev Environment Supporting Multiple Isolated Environments?

    - by jpganz18
    I am trying to create a "dev" environment for my users. In that environment they would each have access to their own phpMyAdmin, SQL, Subversion and FTP accounts, which is not a big problem, but I would also like to emulate each user being on their own server. I mean that they could change the PHP configuration (for example) and the change would apply only to their own environment. Any idea how to do this? Do I have to do something "special" when installing my server, or something like that?
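    One way to approximate "their own server" for PHP without separate machines is to give each user their own PHP-FPM pool running under their own account, each with its own configuration overrides; a sketch of a single pool (user name, socket path and the chosen directives are illustrative, and this assumes a PHP-FPM based stack rather than mod_php):
    ; /etc/php5/fpm/pool.d/alice.conf
    [alice]
    user = alice
    group = alice
    listen = /var/run/php-fpm-alice.sock
    ; per-user PHP settings, invisible to the other users' pools
    php_admin_value[memory_limit] = 256M
    php_admin_flag[display_errors] = on
    php_admin_value[error_log] = /home/alice/logs/php_errors.log
    The web server then routes each user's vhost (or directory) to the matching socket; if stronger isolation than PHP settings is needed, per-user containers or small VMs are the heavier alternative.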

    Read the article

  • exception while creating initial context

    - by Harish
    Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/kernel/KernelStatus
    at weblogic.jndi.Environment.<clinit>(Environment.java:78)
    at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
    at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
    at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
    I get this exception when I try to create an initial context to connect to the WebLogic server. This is the code I am running from Eclipse; I have added weblogic.jar and wlclient.jar to the classpath.
    Hashtable env = new Hashtable();
    // WebLogic Server 10.x connection details
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://localhost:7001");
    env.put(Context.SECURITY_PRINCIPAL, "xxxxx");
    env.put(Context.SECURITY_CREDENTIALS, "******");
    return new InitialContext(env);
    Has anyone faced this issue? How can I resolve it?
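    The missing weblogic/kernel/KernelStatus class normally means the client is running against a thin-client jar that does not contain the server kernel classes. One commonly suggested fix to try is building the full client jar from the WebLogic installation and using that instead; a sketch, with WL_HOME as a placeholder for your WebLogic 10.x install directory:
    # Build wlfullclient.jar from the server installation
    cd $WL_HOME/server/lib
    java -jar wljarbuilder.jar
    # Then put wlfullclient.jar (rather than wlclient.jar) on the
    # Eclipse project's classpath next to the application code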

    Read the article

  • How do I install a newer version of GTK in Ubuntu without replacing the current one?

    - by William Friesen
    I am trying to compile file-roller from git, but running autogen.sh gives me this error:
    configure: error: Package requirements (gtk+-3.0 >= 2.91.1) were not met:
    No package 'gtk+-3.0' found
    Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix.
    Alternatively, you may set the environment variables GTK_CFLAGS and GTK_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details.
    I am running Ubuntu Maverick and don't wish to completely replace my current version of gtk, glib, etc. I have tried to compile GTK using the --prefix argument of autogen.sh, but this gives me a similar error about my version of glib. How can I successfully compile file-roller using these new libraries without borking my install?
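    The usual pattern is to build the whole newer stack (GLib first, then GTK+, then file-roller) into a private prefix and point pkg-config and the dynamic linker at it only while building and testing; a sketch, with the prefix path as an assumption:
    PREFIX=$HOME/gtk3
    # Build each dependency into the private prefix (GLib before GTK+)
    ./autogen.sh --prefix="$PREFIX" && make && make install
    # Make the prefix visible to configure/pkg-config and the runtime linker
    export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
    export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"
    export PATH="$PREFIX/bin:$PATH"
    # Now configure and build file-roller against the libraries in the prefix
    ./autogen.sh --prefix="$PREFIX" && make
    Because nothing is written outside $PREFIX, the packaged GLib/GTK stay untouched; the similar error about GLib just means the newer GLib has to be built into the prefix before GTK+ so that its .pc file is found first.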

    Read the article

  • Repercussions of Raising Domain Functional Level to 2008 on Mac computers running 10.6.2 with OD

    - by JohnyV
    We have recently replaced all of our Windows 2003 domain controllers with 2008 R2 and have tried to implement PSOs, but have found that the domain functional level must be raised to 2008. We have a Mac server in our environment that runs Open Directory and is integrated into AD. Does anyone know, if I do raise the domain functional level (which makes sense, since we only have 2008 R2 domain controllers), what repercussions (if any) there will be for the Macs in the environment? The Macs are running 10.6.2, and the Mac server runs the same. The Mac server is running OD and is also bound to AD.

    Read the article

  • open-sshd service without pam support!! How can I add pam support to sshd? Ubuntu

    - by marc.riera
    Hi, I'm using AD as my user account server with ldap. Most of the servers run with UsePam yes except this one, it has lack of pam support on sshd. root@linserv9:~# ldd /usr/sbin/sshd linux-vdso.so.1 => (0x00007fff621fe000) libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000) libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000) libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000) libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000) libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000) libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000) libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000) /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000) I have this packages installed root@linserv9:~# dpkg -l|grep -E 'pam|ssh' ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement ii quest-openssh 5.2p1_q13-1 Secure shell root@linserv9:~# What I'm doing wrong? thanks. Edit: root@linserv9:~# cat /etc/pam.d/sshd # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password

    Read the article

  • Best Practices: How can an admin deploy software to 100s of PCs?

    - by Gopal
    Hi ... The environment: I am working for a college. We have a couple of labs (about 100 PCs) for students. At the end of the semester, the PCs will be full of viruses, corrupt system files, all sorts of illegal downloads, etc. (everything you can expect from a student environment). At that point we would like to wipe all the systems and do a clean install (Windows XP + a set of application suites) to get ready for the next batch of students. Question: Is there any free software that will enable an admin to deploy a clean disk image to all the PCs in one go?

    Read the article
