Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • can't change wallpaper real time regedit commands

    - by itagomo
    I'm a first-time SuperUser poster. What I want is to programmatically change the desktop wallpaper every few hours. I'm using a batch file (.bat) and don't want to use other languages or programs, just the stuff that comes pre-installed with Windows XP. I've already written a script that modifies values in the registry:

        reg add "HKCU\Control Panel\Desktop" /v Wallpaper /d "C:\Pictures\picture1.jpg"

    but it doesn't take effect in real time, even with this command:

        RUNDLL32.EXE USER32.DLL,UpdatePerUserSystemParameters ,1 ,True

    I need to reboot first for it to take effect. If I use Display Properties instead, the change shows at once. What I've noticed is that changes take effect in real time for .bmp files, but not for .jpg images. My second option is to convert the JPGs to 24-bit BMP files (they'd look exactly the same, but at triple the file size), but I'm hoping for a better way. I've already googled, to no avail. I hope you (the helpful reader) can post a .bat or even a .vbs script that changes the desktop wallpaper instantly with JPG pictures. I'm hoping there's an answer that doesn't involve installing other apps or scripts.
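
    A minimal sketch of the registry-plus-refresh approach the poster describes, restricted to the .bmp case (the one where the refresh reportedly works without a reboot); the path and value are taken from the question:

        @echo off
        rem Point the per-user wallpaper value at a BMP file.
        reg add "HKCU\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d "C:\Pictures\picture1.bmp" /f
        rem Ask USER32 to re-read the per-user settings so the change applies now.
        RUNDLL32.EXE USER32.DLL,UpdatePerUserSystemParameters ,1 ,True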

  • Remote Desktop svchost (networkservice) & lsa.exe high cpu usage, hangs on welcome screen

    - by Rohan1
    We have deployed an RDS farm with 12 virtual RDS servers using Hyper-V. Currently some users are not able to log on: after passing credentials to the connection broker, the session hangs on the "Welcome" screen.

    Using Resource Monitor we've seen that svchost (hosting the "networkservice" service group) sits at 50% CPU; viewing the wait chain on the process shows it's waiting for lsa.exe to finish. We can't kill any of the user's processes, even with taskkill /f. Suspending lsa.exe did work, but had no effect, and the network service couldn't be restarted either.

    When this happens, the users logged on to the RDS server can't be listed: Task Manager crashes when viewing the users, RDS service manager crashes when viewing the users (even remotely), and the cmd command "query session" doesn't work. No antivirus is installed on the RDS server. The only thing we can do is reboot the server, which is not an option given that other users are in active sessions. Does anyone have ANY idea what's going on? We didn't encounter this in our pre-production setup.

  • Why host and vmware guests fail to get MAC of each other?

    - by Georgiy Nemtsov
    I have a Windows 7 64-bit host running VMware Workstation 8 with two CentOS 6.3 guests. All guest adapters are bridged, with statically assigned IPs. Connectivity between host and guests was fine for many days running this setup. Then today, while I was working, host and guests suddenly became unreachable to each other, while both host and guests could still reach the internet and other machines on my network.

    On the guests, arp -a showed this for the host's IP address:

        ? (192.168.1.3) <incomlete> on eth0

    On the host, arp -a showed this for the guests' IPs:

        192.168.1.19    00-00-00-00-00-00
        192.168.1.20    00-00-00-00-00-00

    All other ARP records were OK. Deleting the ARP caches didn't help, on either the guests or the host. After that I disabled and re-enabled the network adapter on my Windows 7 host, and the problem was gone; arp -a now shows correct MACs everywhere. As I suppose, the issue was about expired ARP cache entries: for some reason host and guests couldn't get each other's MACs. Does anybody know what this is all about? I am preparing the guests to work in production and don't want to face such problems in the future.

    I was also surprised, while investigating this issue from another machine that could reach both the guests and the host, to find that arp -a on that machine showed the same MAC for the host and the two guests.
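
    For reference, a sketch of the cache inspection and flush commands involved on each side (interface names are examples):

        # On the CentOS guests: show and flush the neighbor (ARP) cache
        ip neigh show dev eth0
        ip -s -s neigh flush all

        # On the Windows 7 host (elevated prompt): show and clear the ARP cache
        arp -a
        netsh interface ip delete arpcache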

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET / SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing, needing a reboot, a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure however I like. We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below. Any suggestions on what may be causing the freezing, as well as suggestions on a new setup, would be great. My main questions are: 1) do SSDs generally have problems running the OS and SQL Server on the same RAID array? and 2) are the new SSDs still unrefined enough that they shouldn't be running in a production environment? Thanks.

    Current:
    - Xeon Quad Core E3-1270 3.40 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - First hard drive: 120 GB Intel SSD
    - Second hard drive: 120 GB Intel SSD
    - Third hard drive: 120 GB Intel SSD
    - Fourth hard drive: 120 GB Intel SSD
    - SAS 4-port RAID card
    - Windows 2012 Standard Edition, 64-bit
    - MSSQL 2008 Web Edition

    Possible causes:
    - Running SQL Server and the OS on the same RAID array
    - OS software issues
    - Using SSDs
    - CPU underpowered
    - Not enough RAM

    Option 1:
    - 2x Xeon Quad Core E5-2603 1.80 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - 1 x 240 GB Intel SSD (OS)
    - 3 x 1 TB SATA HDD, 7200 RPM (SQL Server)
    - SATA 4-port RAID card
    - Windows 2012 Standard Edition, 64-bit

    Option 2:
    - Dell PowerEdge E3-1270v2 3.5 GHz, 4 cores
    - 16 GB DDR3-1600 UDIMM
    - 4 x 128 GB Samsung 840 Pro SSD
    - Add-in H200 (SAS/SATA controller), 4 hard drives, RAID 10
    - Windows 2012 Standard Edition, 64-bit
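
    Given question #1 above, one way to check whether storage stalls line up with the freezes is SQL Server's per-file I/O statistics; a sketch (the DMV and columns below exist in SQL Server 2008):

        -- Cumulative I/O stall per database file since instance start; high
        -- io_stall relative to the read/write counts points at the storage path.
        SELECT DB_NAME(database_id) AS db,
               file_id,
               num_of_reads, num_of_writes,
               io_stall_read_ms, io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        ORDER BY io_stall_read_ms + io_stall_write_ms DESC;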

  • What character can be safely used for naming files on unix/linux?

    - by Eric DANNIELOU
    Until yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem with them)?

    PS: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).

    Edit #1: (By the way, I don't use upper-case letters for file names. I don't remember why. But for a few months now, I have had production problems with upper-case letters: some OSes do not support ASCII!) Here's what happened yesterday at work. As usual, I had to create a self-signed SSL certificate, and as usual I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then came the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt and example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in Apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworker/vendor/... could run a script using rm, find, xargs that would lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and already one answer talks about disaster.

    Edit #2: Just figured out that : does not need to be escaped. Does anyone know why it is not used in file names?
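
    As a reference point, here's a sketch that tests a name against the POSIX "portable filename character set" (letters, digits, dot, underscore, hyphen), the conservative baseline closest to what the poster already uses; the sample names are illustrative:

        #!/bin/sh
        # Return 0 if the name sticks to the POSIX portable filename character
        # set (A-Z a-z 0-9 . _ -) and does not start with a hyphen.
        is_portable_name() {
            case "$1" in
                -*) return 1 ;;                    # leading '-' can be taken as an option
                *[!A-Za-z0-9._-]*) return 1 ;;     # contains a character outside the set
                *) return 0 ;;
            esac
        }

        is_portable_name "www2.example.com.key" && echo safe
        is_portable_name "wild*card.crt"        || echo risky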

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines to create a test environment. Here are the steps I am taking:

    1. In off hours, P2V/V2V the production machines and convert them to templates.
    2. Deploy the virtual machines with a customization specification that changes hostname and IP address.

    I am interested in how these processes are done and whether there are any options for further customization. Are the machines brought onto the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts that reside on the guest as part of the reconfiguration, for example to restore an Oracle database? This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent.

    I am dealing with a multi-tier application. The main issue I am attempting to mitigate is that the application servers reference database servers by hostname and in tnsnames files. I would like to script the reconfiguration of the application during deployment so that the app/db servers point to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up; what I am interested in is automating the script's execution after clone/boot, and whether there could be an IP address conflict.

    (Cross-posted to VMTN's ESX 4 community.)
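
    For the "run a script on the guest after clone/boot" part, one common pattern is a run-once hook baked into the template, sketched below independently of VMware's tooling; the paths and the tnsnames substitution are placeholders:

        # appended to /etc/rc.d/rc.local on the template: run cleanup exactly once
        FLAG=/var/lib/firstboot.done
        if [ ! -f "$FLAG" ]; then
            # placeholder: repoint the app tier at the test database host
            sed -i 's/proddb.example.com/testdb.example.com/g' \
                /opt/oracle/network/admin/tnsnames.ora
            touch "$FLAG"
        fi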

  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY error page that is causing this error to be logged and the squid process to fail on startup. I can't show you the exact content of the file, but I can tell you it has several lines of JavaScript. When we remove the JavaScript, the problem goes away.

    This same file does not cause any issues on the other 3 instances of squid that we have running internally. All instances of squid came from the same VM image, so they should be the same. We are unable to reproduce this issue except on the one box, and we can't debug more on that box right now because it is running in production.

    I know these files are interpreted so that squid can substitute certain values available in the session, so it may be that a syntax error caused this issue. That does not explain why we cannot reproduce it on the other (virtually identical) images, though. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. Thanks!

  • Nagios escalation debugging

    - by Oesor
    I'm having some issues with escalations happening properly, and I'm not sure if it's because of my config or because the nagios binary is nonstandard and something may be broken. I've got little experience with nagios and just want to make sure this is set up appropriately. Should the following config allow the escalations to take over and increment the notification interval as expected? Is there somewhere else in the config files I should be looking to figure out what's going on? I've enabled debug 32 in the config and it's simply spitting out 'Host notification will NOT be escalated.' for each notification. The configuration does pass the pre-flight check with no issues, and reports that it's parsing the three host escalations in the config.

        # test host definition
        define host {
            host_name               test
            alias                   test
            address                 10.0.0.10
            hostgroups              test
            check_interval          0
            retry_interval          1
            max_check_attempts      2
            flap_detection_enabled  0
            icon_image              windows.png
            icon_image_alt          LOGO - Windows
            vrml_image              windows.png
            statusmap_image         windows.png
            action_url              /info/host/275
            check_period            24x7
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
            check_command           check_host_15!-H $HOSTADDRESS$ -t 3 -w 500.0,80% -c 1000.0,100%
            parents                 nagios
            notifications_enabled   1
            notification_interval   3
            notification_period     24x7
            notification_options    u,d,r
            use                     host-global
        }

        define hostescalation {
            host_name               test
            first_notification      3
            last_notification       4
            notification_interval   10
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation {
            host_name               test
            first_notification      4
            last_notification       5
            notification_interval   30
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation {
            host_name               test
            first_notification      5
            last_notification       0
            notification_interval   240
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start-up times for Tomcat 7. I have done some testing, tweaking configuration parameters both on Linux CentOS (kernel version 2.6.18) and on Windows 7, using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp. I managed only a modest improvement. The improvements seemed to come when I added the metadata-complete="true" attribute to the <web-app> element of my WEB-INF/web.xml file, and when I added the names of almost all the jars our application uses to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in conf/catalina.properties.

    I've also used this JAVA_OPTS in the setenv.sh file:

        JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"

    but actually saw my start-up times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 start-up times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
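
    For reference, a sketch of the jarsToSkip change described above; the property lives in conf/catalina.properties in Tomcat 7, and the jar names here are placeholders for the application's own jars:

        # conf/catalina.properties -- jars that need no annotation/TLD scanning
        # (names below are placeholders; list your application's jars)
        tomcat.util.scan.DefaultJarScanner.jarsToSkip=\
        spring-core.jar,hibernate3.jar,commons-lang.jar,mylib.jar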

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it.

    Problem: after installing a low-cost eSATA board, my SQL Server is intermittently unresponsive, seemingly when there is a lot of I/O to the eSATA drive.

    Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not #1 is true: what is a decent, low-cost way to add 1-3 TB of storage to SQL for non-critical SQL DBs?

    Detail: our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission-critical; data loss is OK). As a band-aid, I threw a $20 eSATA PCI board into the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even Remote Desktop, now experience network "pauses" that they never did before (with both SQL client apps and Remote Desktop throwing "networking problem" errors). This SQL Server has lots of memory, and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years.

    The server is:
    - Dell 1950
    - Win2003 x64
    - 14 GB RAM
    - PERC controller, 2 mirrored HDs internal
    - Dell SAN over gigabit Ethernet, dual-homed
    - 2 PCI-X slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board)

    Thank you for suggestions!

  • How to setup a virtual host in Ubuntu running on Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve the app from /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone might ask, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
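
    For comparison, a sketch of the usual shape for this kind of setup: ServerName takes a bare hostname (it cannot carry a path such as /myapp), and the URL prefix is mapped with Alias. The names below mirror the question; everything else is an assumption:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            # Serve the Laravel public/ folder for the /myapp URL prefix
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All      # let Laravel's .htaccess rewrite to index.php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>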

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ].

    The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them into a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the form:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates took around 45 minutes; now, testing with ~17 million records, the updates take closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory and CPU, and the whole DB should fit into memory with plenty of room to grow.
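
    A minimal Ruby sketch of the grouping-and-batching step as described; 'rows' and 'db.execute' are placeholders for the in-memory update table and whatever MySQL adapter is in use:

        # Group the pre-calculated updates into { update_value => [sorted ids] }.
        updates = Hash.new { |h, k| h[k] = [] }
        rows.each { |r| updates[r[:update_value]] << r[:product_id] }

        updates.each do |value, ids|
          ids.sort!
          ids.each_slice(10_000) do |batch|            # cap the IN-list size
            db.execute("UPDATE product SET update_value = #{value} " \
                       "WHERE product_id IN (#{batch.join(',')})")
          end
        end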

  • Using Monit to monitor Resque

    - by Alex
    I'm trying to use resque as a job runner for Rails. I've tried this config, and many other ways of daemonizing the resque task (because running rake resque:work leaves the terminal tied to that command). Unfortunately, their example configuration doesn't work for me. Does the configuration look correct? Or is there another way to turn the process into a daemon? Thank you :)

        check process resque_worker_QUEUE
          with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
          start program = "/bin/sh -c 'cd /data/APP_NAME/current; RAILS_ENV=production QUEUE=queue_name VERBOSE=1 nohup rake environment resque:work& > log/resque_worker_QUEUE.log && echo $! > tmp/pids/resque_worker_QUEUE.pid'"
            as uid deploy and gid deploy
          stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -s QUIT `cat tmp/pids/resque_worker_QUEUE.pid` && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
          if totalmem is greater than 300 MB for 10 cycles then restart   # eating up memory?
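
    One observation, offered as a sketch rather than a verified fix: in the start line above, the '&' sits between the rake command and its output redirection. The conventional shell ordering redirects first, backgrounds second, then captures the PID of the backgrounded job:

        # conventional daemonize-in-shell form
        cd /data/APP_NAME/current
        RAILS_ENV=production QUEUE=queue_name VERBOSE=1 \
          nohup rake environment resque:work > log/resque_worker_QUEUE.log 2>&1 &
        echo $! > tmp/pids/resque_worker_QUEUE.pid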

  • What's wrong with closing applications on Windows Mobile?

    - by balpha
    As far as I can tell, this annoys the crap out of people who do notice, and at best gives no real benefit to people who don't: why did Microsoft decide to make the "X" on Windows Mobile (or CE before that) not close, but only hide, the application, and thus keep cluttering up your memory? WM wants you to go to Control Panel > Memory and "do you really want to" shut down the app. Pretty much every WM application I've seen that did not come from Microsoft has a "Quit" menu choice. The number of task managers out there that let you quit programs is larger than the count of emails from African bank managers who want me to take care of some millions of bucks belonging to a deceased customer of theirs. My new HTC even comes with a close-able (not closeable, though) task manager pre-installed. But still today, Word Mobile just wants to hide, not be closed. I don't want a "That's M$hit, get used to it" answer; I really want to know: what in the world is the reason for this decision, and even more, for still sticking with it?

  • Can you see something wrong in my working .htaccess?

    - by AlexV
    OK, after much searching, trial and error, I've managed to create an .htaccess file that does what I want (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
            RewriteEngine On

            #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
            RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$

            #2 If the requested URI does not end with an extension OR if the URI ends with .php*
            RewriteCond %{REQUEST_URI} !\.(.*) [OR]
            RewriteCond %{REQUEST_URI} \.php.*$ [NC]

            #3 If the requested URI is not in an excluded location
            RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$

            # Then serve the URI via the mapper
            RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do:

    #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain.

    #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...).

    #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2).

    Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I've tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check here before putting it in production...

  • Directories Throwing 404 Errors - Virtual Host Configuration and mod_rewrite

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is the better way to go. The virtual host is working, and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory that has an index.php, such as devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL, so we have virtual hosts set up for both of them, each referencing its own domain.key, domain.crt and domain.intermediate.crt files. Each CSR and certificate file for each site was set up using its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate is served, even when we're visiting the second site.

    For example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate installation, it references Cadbury's certificate.

    We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server. Both certificates are provided by Thawte.com. I came across a few potential solutions, with no degree of success. I'm hoping someone else has a decent solution? Thanks.

    Edit: The only other solution that seems to have provided success to some people is using SNI with Apache. However, the setups described didn't seem to coincide with our setup at all.
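
    For reference, a sketch of the name-based SSL (SNI) layout on the Apache 2.2 that ships with Ubuntu 12.04; without NameVirtualHost *:443, the first vhost's certificate is served for every name, which matches the symptom described. Paths and hostnames are placeholders:

        # /etc/apache2/ports.conf (or equivalent)
        NameVirtualHost *:443
        Listen 443

        # one vhost per site, each with its own certificate
        <VirtualHost *:443>
            ServerName www.cadbury.example
            SSLEngine on
            SSLCertificateFile        /etc/ssl/certs/cadbury.crt
            SSLCertificateKeyFile     /etc/ssl/private/cadbury.key
            SSLCertificateChainFile   /etc/ssl/certs/cadbury.intermediate.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.nestle.example
            SSLEngine on
            SSLCertificateFile        /etc/ssl/certs/nestle.crt
            SSLCertificateKeyFile     /etc/ssl/private/nestle.key
            SSLCertificateChainFile   /etc/ssl/certs/nestle.intermediate.crt
        </VirtualHost>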

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc)
                export BIN=${ABC_BIN}
                ;;
            def)
                export BIN=${DEF_BIN}
                ;;
            *)
                export BIN=${BASE_BIN}
                ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Now these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I'm back at the prompt those vars are still set correctly. I know about ". .myscript.sh", but is there a way to do it without sourcing? My users often forget to source.

    Edit 1: Removed the "exit 0" part; that was just me typing without thinking first.

    Edit 2: To add more detail on why I need this: my developers write code for (for simplicity's sake) 2 apps, ABC and DEF. Each app is run in production by a separate user, usrabc and usrdef, which therefore have their $BIN, $CFG, $ORA_HOME, whatever, set up specific to their app. So:

        ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin   # $DEF_BIN

    Now, on the dev box the developers can develop both ABC and DEF at the same time under their own user account 'justin_case', so I make them source the file (above) to switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which makes me do it interactively, asking for sandbox name and so on:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing aliases and adding them to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with

        setup_env parameter1 ... parameterX

    That makes more sense to me now.
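
    A sketch of the alias idea taken one step further: a function defined in each user's profile runs in the current shell, so the exports stick, and unlike an alias it can pass parameters through. The script name is the one from the question:

        # in each developer's ~/.profile
        setup_env() {
            # sourcing runs the script in the current shell,
            # so its exports survive after the call returns
            . .myscript.sh "$@"
        }

        # usage, at the prompt:
        #   setup_env abc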

  • How do I re-configure grub options

    - by Ash
    I've searched this topic and I can't find an article with a plausible solution to my problem. I installed Windows 7 first, with 100 GB of disk space, and created the necessary partitions via Windows. Then I installed Ubuntu 12.04 on the remaining 400 GB of disk space. During the Ubuntu installation I installed the Ubuntu boot loader on /dev/sda3, and Windows (as expected) never granted me a pre-boot option for which OS I wanted to boot. So I re-installed Ubuntu on that /dev/sda3 partition, overwriting the Windows 7 loader. Now when I boot Windows 7, it runs GNU GRUB, like an infinite loop. How can I reconfigure GRUB to say: /dev/sda is the boot loader, /dev/sda2 is Windows, /dev/sda3 is Ubuntu? Re-installing Windows and my partitions isn't an option, and purchasing software for Windows isn't an option (there's a reason I use Linux, and it's not because it's free: it's because you don't have to install lots of shit to get stuff working, and overall it's a robust OS).
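
    A sketch of the standard GRUB 2 repair on Ubuntu 12.04, assuming the layout described; run it from the installed Ubuntu (or from a live CD after chrooting into the installed system):

        # put GRUB in the MBR of the disk rather than in a partition
        sudo grub-install /dev/sda
        # regenerate grub.cfg; os-prober should pick up Windows on /dev/sda2
        sudo update-grub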

  • Software disables itself when the PC is accessed via RDP

    - by blckgrffn
    We have a large, specialty printer that has vendor-specific software enabling its use beyond simply showing up as a printer in the Windows Control Panel. This software recognizes when we RDP into the machine and "disconnects" the PC from the printer within its proprietary control panel. All is well when an application like TeamViewer is used to access the machine. Ostensibly, the application is helping us be safe by "enforcing" that the machine used for the printer is a walk-up workstation, or so the support folks informed me.

    If TeamViewer and the like fix the issue, then what is the problem? We have many headless workstations in our warehouse attached to a variety of specialty machines, all used via RDP. We want/need to keep access to the machines the same, for the sanity of our production staff.

    The meat of the question: how, specifically, might a machine know that it is being accessed via RDP (Terminal Services management?), and how might this be defeated without altering an application or driver? Of note, the system being used is a Windows 7 Pro machine hooked to the printer via USB. Thanks! Nat

    Edit: Is there any combination of /admin switches, etc. that might fix this? Simply using /admin did not.
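
    On the "how might a machine know" part: Windows does expose the session type to applications, so vendor software commonly checks something like the following (illustrative C, not the vendor's actual code):

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* SM_REMOTESESSION is nonzero when the process is running
               inside a Remote Desktop (Terminal Services) session */
            if (GetSystemMetrics(SM_REMOTESESSION))
                printf("running in an RDP session\n");
            else
                printf("running on the console\n");
            return 0;
        }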

  • Do I need to install mssql before I can use mssql.so with php on unix?

    - by lock
    I didn't install any MSSQL instance on my localhost, which runs Windows; I just used the XAMPP package and uncommented the modules used for mssql. The MSSQL server resides on another Windows server, so I believe I only needed a simple connector module. I hoped it would be the same for Unix, but whenever I open my site on the Unix production server (I use CodeIgniter, btw), the logs tell me it stops script execution after "Database Driver Class Initialized".

    I am not really familiar with installing Apache and friends on Unix, and I wasn't responsible for how the server was set up. But it turns out there is no mssql.so in the PHP modules directory, so I tried to google for one. While the forums tell me to just compile the extension, I can't simply do that, as I have no write access to the server; plus, it seems that when PHP was installed, phpize didn't get installed with it. I hope someone can shed some light on this. I think it would be easier if I could just get an mssql.so for PHP 4.4.4.
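
    For reference, a sketch of how mssql.so is typically built for PHP from source against FreeTDS; it assumes write access, a compiler, and the php-dev tooling that provides phpize, which is exactly what's missing here:

        # typical build of PHP's bundled mssql extension against FreeTDS
        tar xzf php-4.4.4.tar.gz
        cd php-4.4.4/ext/mssql
        phpize
        ./configure --with-mssql=/usr/local/freetds
        make
        # then copy modules/mssql.so into PHP's extension_dir and add
        # "extension=mssql.so" to php.ini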

  • Discover intended Foreign Keys from JOINS in scripts

    - by Jason
    I'm inheriting a database that has 400 tables and only 150 foreign-key constraints registered. Knowing what I do about the application, and looking at the table columns, it's easy to say there ought to be a lot more. I'm afraid the current application software will break if I start adding the missing FKs, because the developers have probably come to rely on this "freedom", but step one in fixing the problem is coming up with the list of missing FKs so we can evaluate them as a team.

    To make matters worse, the referencing columns don't share a naming convention. The relationships ARE coded informally into hundreds of ad-hoc queries and stored procedures, so my hope is to parse these files programmatically, looking for JOINs between actual tables (but not table variables, etc.). Challenges I foresee in this approach: newlines, optional aliases and table hints, alias resolution.

    Any better ideas? (Besides quitting.) Are there any pre-built tools that can solve this? I don't think regex can handle this; do you disagree? SQL parsers? I tried using Microsoft.SqlServer.Management.SqlParser.Parser, but all that is exposed is the lexer; I can't get an AST out of it, since all that stuff is internal.
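
    As a crude first pass (not a parser), the module definitions can at least be narrowed down server-side before any text analysis; a sketch for SQL Server:

        -- List stored procedures / views whose definitions mention JOIN,
        -- as a starting set for reviewing the implied relationships.
        SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
               OBJECT_NAME(m.object_id)        AS module_name
        FROM sys.sql_modules AS m
        WHERE m.definition LIKE '%JOIN%'
        ORDER BY schema_name, module_name;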

  • Allow more websocket connections

    - by Switz
    I want to load-balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512 MB RAM); it can probably take more than one process running at once. The queries/database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once. I would love to get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch, due to the fact that it's a highly niche community with an engaged audience; after launch it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two's worth of running, but I also don't want to switch production environments.

    How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2 GB RAM and a 2.8 GHz dual-core processor; could I use it to handle some of the extra load? I could probably load-balance with nginx to its IP. I'm on a FiOS home network. If you have any suggestions, I'd really appreciate them. Thanks.
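
    A sketch of the nginx side of this, assuming nginx 1.3.13 or later (the first release that can proxy WebSocket upgrades); the ports and addresses are placeholders:

        # WebSocket-aware load balancing across several node processes
        upstream derby_app {
            ip_hash;                    # keep a client pinned to one process
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
            server 192.168.1.50:3000;   # e.g. the spare MacBook Pro
        }

        server {
            listen 80;
            location / {
                proxy_pass http://derby_app;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }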

  • Deploying and publishing my first asp.net mvc 3 web application

    - by john G
    I want to deploy and publish my first ASP.NET MVC 3 web application at my client's site (the client is a small office with 2-4 employees who need to access the application). I have finished developing the application using the free Microsoft Visual Web Developer 2010 Express and the free SQL Server 2008 R2 Express database. My concern is that the free SQL Express database has a 10 GB limit, so I want to buy SQL Server for Small Business to remove the database limitations. The questions I need help with are:

    1. If I use SQL Server for Small Business, will my web application have other production limitations I am unaware of?
    2. Is SQL Server for Small Business the right choice, bearing in mind that the system will be used by only 2-4 clients?
    3. Approximately how much will the SQL Server database cost, in US dollars?
    4. Is there any other software I need to buy to be able to deploy and publish the application on an intranet and the internet?

    Appreciate any help. Best regards

  • Adding a 2008 server to a 2003 Domain with DNS devolution?

    - by mvdwege
    I'm running into a problem adding a 2008 server to our existing 2003 domain, and as I am not a Windows admin, I'm not getting the problem here. Some reading around on TechNet seems to indicate that DNS devolution is the issue. Here's the setup: DNS for the entire company is hosted on a Unix server running BIND, including the service records for the Windows domain. Our top level is company.local, and functional domains are in subdomains, such as mgt.company.local (our management servers). Our Windows servers live mostly in office.company.local, but some of them live in mgt.company.local and customers.company.local. The 2003 servers all successfully authenticate against company.local as the Windows domain. Their position in the infrastructure is set by setting the primary DNS suffix, under the network settings and the computer name dialog.

    Trying to do the same with a brand-new 2008 install throws an error, though: "Changing the Primary Domain DNS name of this computer to office.company.local failed [...] The specified server cannot perform the requested operation". I tried googling, but the closest I came was the TechNet article on DNS devolution, and I can't make heads or tails of how to apply that to my case.

    Addendum 2012-10-23: The problem is not joining the domain; that works. The problem is that it joins with the wrong name, as .company.local instead of .office.company.local. So far everything works, but I'm rather afraid to run production like this, because sooner or later something is going to complain about the AD name not matching DNS.
