Search Results

Search found 22829 results on 914 pages for 'nautilus script'.


  • Is there any way to remotely configure a Microsoft Lync account?

    - by John O
    There are no Perl modules for Lync and no open-source clients. Windows PowerShell can do some things with it, but only on the server on which the server software is installed. It would be useful to be able to forward a certain desk phone number (we use Lync for VoIP) to a personal cell phone. We can do this from our own desktop machines, but only through the Lync client. It would be nice to have a cron script handle the on-call rotation so that I wouldn't have to carry the lousy on-call phone around with me. communicator.exe doesn't take any useful parameters, nor are there any obvious function names in the DLLs that would let me use rundll32.exe to accomplish this. There is a Lync SDK, but it has no examples of changing phone forwarding, and my Windows 7 machine refuses to install the Silverlight SDK dependency for a reason I can't fathom. Does anyone have any other ideas about how I might accomplish this?

    Read the article

  • NETSH : Set default ip address for an interface with multiple Ips

    - by elarichi.y
    To test a load balancer I need to switch my IP address several times a day while keeping the other IPs routing through other WANs. I run these commands in a batch script:
        netsh interface ip set address "Connexion au réseau local" static %ipd% 255.255.255.0 192.168.1.1 1
        netsh in ip add address "Connexion au réseau local" %ips1% 255.255.255.0
        netsh in ip add address "Connexion au réseau local" %ips2% 255.255.255.0
    %ipd% is the default IP I want to set (all traffic should go through it); %ips1% and %ips2% are the secondary IPs I want to keep. But whatever I do, all traffic goes through one IP (the first one in the range). Please help me with this issue.

    Read the article

  • How can I prevent frame breaking in Chrome from Google image searches, etc.?

    - by Nick T
    More often than not, websites with any number of images use frame-breaking scripts to escape Google Image Search's results frame (e.g. this relatively benign case). While I somewhat understand the reasons for doing so (as ineloquently put forth by these people), such breakouts/redirects usually dump me on a useless page that doesn't have the image I was looking for, and they make going "back" rather irritating, since you need to click twice or more in rapid succession (some pages seem to jam you through several redirects). Other than being quick enough to copy the 'Full-size image' hyperlink before the breakout script loads, is there a way to get to my actual result?

    Read the article

  • 500 Internal Server Error from a long-running PHP process

    - by Sabirul Mostofa
    I am trying to run a long PHP process and it ends with a 500 Internal Server Error. It executes fine for about 8 minutes. I rebooted the machine after changing the PHP settings.
    PHP config:
        max_execution_time: 3600
    After around 10 minutes, ps ax | grep php shows:
        19007 ?  S  0:08  /usr/bin/php /home/gypsy/public_html/index.php
    I have set ignore_user_abort to true. The process gets stuck at 0:08 and isn't executed further. The Apache error log shows the error:
        Script timed out before returning headers: index.php
    It seems that max_execution_time somehow isn't taking effect. Any suggestion would be a great help.
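
    A hedged check worth running first: max_execution_time only limits PHP's own run time, and the fact that the timeout is reported in the Apache error log points at a server-side limit (e.g. the CGI/FastCGI handler's own timeout) rather than PHP. Paths below are the usual Debian/RHEL locations and may differ:
        # What does PHP itself think the limit is?
        php -r 'echo ini_get("max_execution_time"), PHP_EOL;'
        # Any Apache-side timeouts that can kill the request independently of PHP?
        grep -riE 'Timeout|FcgidIOTimeout|IPCCommTimeout' /etc/apache2/ /etc/httpd/ 2>/dev/null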

    Read the article

  • Pass User Data to AWS client

    - by bearrito
    Has anyone successfully passed user data to the AWS CLI? I have tried various incantations of the following, but it does not work. The docs say the string must be base64-encoded: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html The instance logs never indicate that the script is executed or that Chef is installed.
        aws ec2 run-instances --image-id ami-a73264ce --count 1 --instance-type t1.micro --key-name scrubbed --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed --user-data $(base64 chef_user_data.sh --wrap=0)
    chef_user_data.sh:
        #!/bin/bash
        curl -L https://www.opscode.com/chef/install.sh | sudo bash
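
    One hedged thing to try: the AWS CLI can base64-encode user data itself when it is given with the file:// prefix (exact behaviour varies between CLI versions), so pre-encoding it on the command line can leave the instance with doubly encoded data that cloud-init never runs. Same parameters as above, with the encoding left to the CLI:
        aws ec2 run-instances --image-id ami-a73264ce --count 1 \
            --instance-type t1.micro --key-name scrubbed \
            --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
            --user-data file://chef_user_data.sh
        # Whether the script actually ran can be checked on the instance in
        # /var/log/cloud-init-output.log (on Ubuntu / Amazon Linux AMIs).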

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the app should be run by another user. (The OS I'm using is Debian Squeeze.) I already have this:
        /bin/su - $USER -c "cd $PATH; echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
        PID=$!
        /bin/su - $USER -c "echo $PID > $PIDFILE"
    But this will of course only save the PID of the /bin/su process instead of the PID of the Java process it created.
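
    A minimal sketch of one way around this: capture the PID inside the su shell, where $! refers to the backgrounded java pipeline rather than to su itself. Variable names follow the snippet above, except that $PATH is replaced by a hypothetical $APPDIR (reusing $PATH clobbers the shell's search path); quoting assumes none of the values contain double quotes, and $PIDFILE must be writable by $USER:
        /bin/su - "$USER" -c "cd $APPDIR; echo $PASSWORD | $JAVA -Xmx256m -jar $APPDIR/app.jar -d > /dev/null & echo \$! > $PIDFILE"
        # On Debian, start-stop-daemon --chuid "$USER" --background --make-pidfile
        # --pidfile "$PIDFILE" is another common way to get the same effect.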

    Read the article

  • Can't boot after compiling 3.1 kernel, can only get to terminal

    - by olssy
    Long story short: I tried compiling kernel 3.1 on Ubuntu 11.10 at the same time as I had an update waiting for a reboot. The computer would boot to a black screen and hang there. I ended up installing 11.10 on top of the old install with a Live CD. After that I got a purple screen on bootup, but it would eventually boot. I realized GRUB was the problem and tried some things, but nothing worked. I then tried to install the proprietary ATI video drivers, and since then nothing has worked: no GRUB menu (purple screen), and when it boots into the kernel it ends up hanging. I can sometimes get a terminal up with Alt-Fx. I have tried removing the ATI drivers with the ATI script, purging my fglrx driver, reconfiguring my xorg.conf, and following every tutorial I can find about fixing a broken graphics driver, but to no avail. I've gotten to a point where it seems the proprietary ATI drivers are loaded correctly, but there is still no GRUB boot menu and it won't boot into Ubuntu. I've chased down my logs, and this line is from kern.log:
        unity-greeter[3269]: segfault at 0 ip b7245cbb sp bf9d3900 error 4 in libgio-2.0.so.0.3000.0[b71ad000+142000]
    That line leads me to believe I don't have the correct libgio-2.0 library on my system, but I have no idea how to find out which package provides the correct version. My xorg.conf has no errors and seems to imply the fglrx drm module got loaded correctly. It would be a bit complicated to paste the whole file here, but I'll post it if it would help. Lastly, running fglrxinfo gives me:
        Error: unable to open display (null)
    Any help or a link to another tutorial would be appreciated. Thanks.

    Read the article

  • Can't remove software - Installed in NULL

    - by ChosSimbaOne
    I've installed software on our administration machine. The problem is that I cannot start the software or uninstall it, as it is not in any directory on the machine. I tried to install it at /pack/CST/... but it is not there, and a locate on CST or cst returns nothing. The software is installed from a DVD, not a repository. I've tried rebooting the machine, as I thought it might have something to do with the software being loaded in some sort of tmpfs, but that didn't help. I've looked through the entire /etc to check for anything related to the software, but without success. I'm out of ideas as to what could cause this problem. Has anyone got any ideas?
    EDIT: I downloaded the ISO, which I mounted and installed with:
        sudo mount -o loop /path/to/iso.iso /path/to/mountpoint
        sudo /path/to/mountpoint/install.sh
    I ran the install GUI via an X session. I chose to install the software in /pack/CST/... but when it exited it said the software had been installed to /tmp/... There was nothing in /tmp, so I decided to reboot the machine and did a full find to see if there was anything left of the software, and removed what looked like it could be related. It had placed a script in all the /etc/rc*.d folders, which I removed with:
        sudo update-rc.d -f scriptname -r
    I rebooted the machine again, just to be sure. When I run the installer again, it tells me that the software is installed in NULL and that I have to remove it before installing it. /pack/ is a mountpoint for /q/system/pack. What I expected was that the software would be installed in /pack/CST, but the old install seems to be recorded somewhere in the system, and I am unable to locate where.

    Read the article

  • Stop moving page title in Firefox?

    - by lilydjwg
    Some websites I use make the page title move or blink when, for example, a new mail has just arrived. It is really distracting, since it's not an emergency and I'm usually busy browsing other pages. I can't stand it any longer, but I have to use those services. I want to stop this kind of behavior, either by installing an extension or by writing a Greasemonkey script, but I can't find an existing extension and have no idea how to detect and stop the JavaScript responsible. What is the solution?

    Read the article

  • can't run binaries or shell scripts

    - by hyperboreean
    I am running Debian testing and I am not able to run any binary or shell script. I keep getting "No such file or directory". The umask is the default one and I haven't fooled around with the paths. I am aware of this question, but it doesn't apply to me: I compiled my code on this machine and am trying to run it on the same machine. Also, all of my shell scripts have the correct shebang. Any advice?
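
    A hedged set of checks, assuming the usual cause of this error when the file plainly exists: the interpreter named inside the file is what's missing, either the ELF dynamic loader (e.g. a 32-bit binary on a 64-bit system without 32-bit libraries) or the program on the shebang line. The file names are hypothetical:
        file ./mybinary                                        # 32-bit or 64-bit ELF?
        readelf -l ./mybinary | grep 'program interpreter'     # which loader does it want?
        ls -l /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2   # does that loader exist?
        head -1 ./myscript.sh                                  # shebang points at a real path?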

    Read the article

  • Linux: match only valid IPs from a text file into another file

    - by yael
    Please advise how to match only the valid IPs (i.e. of the form 255.255.255.255, with each octet at most 255) in file.txt and write only the valid IPs into VALID_IP.txt (see the VALID_IP.txt example below). The solution needs to work from my ksh script, so Perl, sed or awk is fine also.
    more file.txt
        e32)5.500.5.5*kjcdr
        ##@$1.1.1.1+++jmjh
        1.1.1.1333
        33331.1.1.1
        @5.5.5.??????
        ~3de.ede5.5.5.5
        1.1.1.13444r54
        192.9.30.174
        &&^#%5.5.5.5
        :5.5.5.5@%%^^&*
        :5.5.5.5:
        **22.22.22.22
        172.78.0.1()*5.4.3.277
    Example of the VALID_IP.txt file:
        1.1.1.1
        192.9.30.174
        5.5.5.5
        5.5.5.5
        5.5.5.5
        22.22.22.22
        172.78.0.1
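
    A minimal ksh-compatible sketch, assuming "valid" means four dot-separated octets each in the range 0-255. It will still pick up addresses glued to extra digits (e.g. 1.1.1.1333), which would need additional boundary checks if that matters:
        grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' file.txt |
            awk -F. '$1 <= 255 && $2 <= 255 && $3 <= 255 && $4 <= 255' > VALID_IP.txt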

    Read the article

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this:
        <?php
        // data collection stuff
        // (...) approx 80 lines of code
        // end data collection
        $var1 = 'some data';
        $var2 = 'more data';
        $var3 = 'other data';
        ?>
    Each module has exactly the same variables; only the data collection is different. I was wondering whether it's a reasonable idea to store the module data in MySQL like this:
        [database]
        |_ modules
           |_ name
           |_ function (the raw PHP code from above)
           |_ description
           |_ author
           |_ update-url
           |_ version
           |_ enabled
    ...and then include the PHP code from the database and execute it? Something like a tab-navigation system at the top of the page with one tab per module name; inside each tab, the page content would be produced by executing the database-stored code from the module's function column. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web apps work, including some of my own, but I had never thought about it this deeply. Are there any drawbacks or security risks to this?

    Read the article

  • UPS compatible with Linux box?

    - by Somebody still uses you MS-DOS
    I'm buying this unit from DealExtreme: it's a BitTorrent downloader with NAS capability. I'm interested in sharing an external HD on it, for media and backup purposes. I'm afraid of power problems corrupting my mounted drives (like after a storm), so I thought about buying a UPS that sends a "signal" to my Linux box, and a script on the Linux box would unmount everything to avoid problems. Does this "UPS signal" feature exist? Do you have model suggestions? Thanks!
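
    This feature does exist in the common Linux UPS daemons; the following is a hedged sketch assuming a UPS supported by apcupsd (NUT works similarly via upsmon/upssched). apcupsd runs a script named onbattery from /etc/apcupsd/ when the UPS reports a mains failure, so the unmount can go there; the mount point is hypothetical:
        #!/bin/sh
        # /etc/apcupsd/onbattery -- run by apcupsd's apccontrol when power fails
        sync
        umount /media/external && logger "power failure: external drive unmounted"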

    Read the article

  • How to make NFS mounts available while offline?

    - by lpanebr
    Problem: I work on a notebook, and while at work I have access to many NFS-mounted drives. When I get home they are obviously not available.
    Windows 7 solution: My business partner uses Windows 7 and maps the folders via Samba. Windows 7 has a very nice feature that lets him make these folders available offline, so when he connects to the work network the changes get synchronized!
    Question: Is there a way to mimic that in Ubuntu?
    What I have now (server-to-local sync): I have added rsync entries to my crontab to copy server folders to local folders every five minutes. When at work I use the NFS-mounted folders, and while away from work I use the local copies. When I get to work I manually run a script that syncs the local folders back to the server folders.
    Problems with my setup: slow startup when not at work (I guess due to fstab trying to mount the server folders); no conflict checking or management; I have to remember to sync manually and be careful because of the different file locations; recent-files lists do not carry over between work and home.
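
    A hedged sketch of one alternative: unison does two-way synchronization with conflict detection, which covers the "no conflict checking" and "different file locations" problems, though not the automatic on-connect sync that Offline Files gives. Paths and the hostname are hypothetical:
        # Run while at work (or over VPN); -batch applies non-conflicting changes
        # and reports conflicts instead of silently overwriting either side.
        unison /home/me/work ssh://fileserver//export/work -batch -times
    Separately, marking the NFS entries in /etc/fstab with noauto (or moving them to autofs) is the usual way to stop the boot from waiting on unreachable mounts.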

    Read the article

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or a line of code that could let me observe a process, save the info in a number of variables, and then write the gathered info to a file. I've been trying variations of top with no luck. I am running several CentOS virtual servers; each VM has 2 GB of RAM and 2 processors. Maybe a script that runs for a specified amount of time while appending lines of info to a text file, so that at the end I have a sort of table with the data. The thing is, I'm going to stress test the server and I would like to have the data to compute some statistics. Any comments and suggestions are most welcome.
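
    A minimal sketch of such a script, assuming the PID of the process to watch is known and that one CSV line every 10 seconds is enough resolution (sar/pidstat from the sysstat package would do the same job with less code):
        #!/bin/sh
        PID=1234                      # hypothetical PID of the process to watch
        OUT=usage.csv
        echo "epoch,%cpu,%mem,rss_kb" > "$OUT"
        while kill -0 "$PID" 2>/dev/null; do
            ps -p "$PID" -o %cpu=,%mem=,rss= |
                awk -v t="$(date +%s)" '{printf "%s,%s,%s,%s\n", t, $1, $2, $3}' >> "$OUT"
            sleep 10
        done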

    Read the article

  • Apache has no rights to unzip file uploaded by DirectAdmin created FTP user

    - by FlyOn
    I know similar things have been asked a thousand times, but I can't seem to figure it out. I have recently started renting a VPS. It runs DirectAdmin. I created a reseller account from the admin account, and created a new user (with an FTP account) from the reseller account. I logged in via FTP as the created user and uploaded a zip file and a PHP file which extracts the zip. But when running the script in the browser I get "Permission denied" errors, probably because Apache is not in the same group as the user's FTP account. Now my question is: how can I solve this once and for all, so that I don't have to do it again for every new user account I create through DirectAdmin?
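
    A hedged, generic sketch of the two usual directions (not DirectAdmin-specific): either give the web server's group read access to the account's files, or run PHP as the account user (per-user suPHP/FastCGI handlers, which control panels can typically be configured to use) so the mismatch never arises. Names and paths below are hypothetical:
        # One-off fix: let the apache group read the user's uploaded files.
        usermod -a -G username apache
        chmod -R g+rX /home/username/domains/example.com/public_html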

    Read the article

  • Creating a command that compresses a file and saves it to a USB drive, but cannot detect the USB drive in Linux

    - by Lance
    First of all, I can't detect the USB drive on Linux using the command line. I checked the /dev directory and still cannot find the USB drive. I used the df command to check for it: I plugged it in and typed df, then unplugged it and typed df again, and nothing changed. We are using telnet from a Windows 7 machine to reach the Linux server's command line. The second problem I have is how to execute the bash script that I have made. It seems that I can't put my .sh file in /usr/bin/. I would like to make my command executable from all directories, like a normal command. Sorry, I'm still a newbie at these things; that's what I get for staying on Windows too much. Sorry for my English. Thank you in advance.
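
    Two hedged sketches, one per question. For the drive: a USB stick plugged into the Windows 7 machine you telnet from will not show up on the remote Linux server, so these checks only make sense if the stick is physically in the server. For the script: it needs the executable bit and a location on $PATH (the script name is hypothetical):
        # Did the kernel see a new USB block device?
        dmesg | tail -20
        cat /proc/partitions        # df only lists *mounted* filesystems
        # Make the script callable from anywhere:
        chmod +x myscript.sh
        sudo cp myscript.sh /usr/local/bin/myscript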

    Read the article

  • From the Coalface - 3 - Work as hard as you can to be as lazy as you can!

    - by TATWORTH
    The saga of the Change Log
    A recent conversation reminded me of the need for change logs within a database, to record when the various change scripts were run. Creating the required table is simple. A typical table for this consists of:
        Id - identity integer primary key
        ChangeFileName - NVARCHAR(128) to hold the name of the file run
        DateAdded - DATETIME, non-null, with a default value of GETUTCDATE()
        Purpose - NVARCHAR(128)
        Rerunnable - BIT, non-null, default 0
    With the table designed this way, only two data values normally need to be supplied. Two stored procedures, one for inserting data and one for listing the log in reverse sequence, complete the database essentials. The complete implementation can be found in the CommonData solution at http://CommonData.CodePlex.Com By including a call to the Add Change Log stored procedure, each script can log its name and purpose for posterity. The scripts that were applied to, say, the UAT system, and their sequence of application, can then be readily identified for running on the Live system.
    Formatting XML
    XML is often produced as one continuous string with no embedded CR/LF. To get it into human-readable form, open it in Visual Studio, swap to another tab and back, and click the Format Document button. The XML will then be nicely formatted!

    Read the article

  • Linux foxboard network monitor

    - by het.oosten
    I want to use a Foxboard as a simple network monitor for multiple routers (all routers are connected to the internet). The Foxboard is a mini PC with an embedded version of Debian. My idea is to use multiple virtual network devices like this:
        eth0    192.168.2.10
        eth0:1  192.168.3.10
        eth0:2  192.168.4.10
    I found a nice Python script to ping an external host here (the solution from Ryan Cox): http://stackoverflow.com/questions/316866/ping-a-site-in-python Is it possible to configure Debian to use eth0 when I ping www.site-a.com and eth0:1 when I ping www.site-b.com?
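
    A hedged sketch for a quick test: ping can be bound to a source address (or interface) with -I, which avoids touching the routing table. Note that if all the addresses sit on the same physical link, really forcing traffic out through a particular router still needs policy routing (ip rule plus per-source routing tables); the hostnames are the ones from the question:
        ping -I 192.168.2.10 -c 3 www.site-a.com
        ping -I 192.168.3.10 -c 3 www.site-b.com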

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. So next I tried to compile and link libmagic.so with a linker script which hides all symbols except the 3 which are defined in libmagic.so and exported by it. This works, or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen. I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?
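
    For reference, a hedged sketch of the usual way a version script attaches a version to a library's exported symbols; file and symbol names are hypothetical and only illustrate the mechanism (with a script like this, nm -D on the resulting library should list the defined, exported symbols as name@@VER_2):
        # Create a version script that exports libVersion under VER_2 and hides
        # everything else, then link the library with it.
        printf 'VER_2 {\n  global: libVersion;\n  local: *;\n};\n' > libsome.map
        gcc -shared -fPIC -Wl,--version-script=libsome.map -o libsome2.so some.c
        nm -D libsome2.so | grep VER_2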

    Read the article

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++, and I am thinking of using ccache to speed up the process. ccache how-tos have examples which suggest creating symlinks named gcc, g++, etc. and making sure they appear in PATH before the original gcc binaries, so that ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the app is compiled, but that looks like filesystem abuse to me. Are there better ways to use ccache selectively rather than always? For compiling a single source file I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
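
    A hedged sketch of the usual per-project approach: point only this build at ccache through the compiler variables its build system honours, instead of shadowing gcc globally (exact variables depend on the build system; these cover most make/autotools and CMake setups):
        # autotools / plain make
        export CC="ccache gcc" CXX="ccache g++"
        ./configure && make -j"$(nproc)"
        # CMake (3.4+) has dedicated launcher variables:
        # cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..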

    Read the article

  • How to detect an iPhone connecting to a network?

    - by JayCrossler
    I've noticed by watching Wireshark that when an iPhone connects to a Wi-Fi network, it sends out a few IGMP/mDNS packets to 224.0.0.251 (the mDNS multicast address, I think). Is there any easy way to watch for these packets and then either run a script or send an event? Or is the best way just to run a packet sniffer? Are there any simple ones that can send events or execute curl commands when a filter is triggered? When I run
        nc -u -l 5353
    I get:
        My-Name-iPhonelocal??? x???)?? ??cc^C
    Can I do something like:
        nc -u -l 5353 | grep iPhonelocal | execute command...
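
    A hedged sketch of the packet-sniffer route: tcpdump prints matching mDNS packets as ASCII, and a small shell loop fires a command whenever a payload line mentions the phone's name (the literal string iPhone here; adjust as needed). It must run as root, it fires once per matching packet unless you add de-duplication, and notify.sh is a hypothetical stand-in for whatever event or curl call should happen:
        tcpdump -l -A 'udp port 5353 and dst host 224.0.0.251' 2>/dev/null |
        while read -r line; do
            case $line in
                *iPhone*) /usr/local/bin/notify.sh "iPhone seen on the LAN" ;;
            esac
        done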

    Read the article

  • I cannot change the grub Default item from OS-1, but I can from OS-2 (dual-boot 10.04 on both)

    - by fred.bear
    My 10.04 system (OS-1) got into a tangle the other day, so I installed a second, dual-boot 10.04 (OS-2) so that I could troubleshoot the hung system. In case it is relevant to my question, I'll mention that since I got OS-1 working again it has shown a few battle wounds from its ordeal (actually the ordeal was mine, trying to figure it all out); I lost some custom settings, but not all. (For the curious: the hang-up was caused by rsync writing 600 GB to OS-1's 320 GB drive. The destination drive was unmounted at the time, and rsync dutifully wrote directly to /media/usb_back, filling it to capacity. I have since amended my script.) Because the dual-boot MBR was prepared by OS-2, OS-2 is first on the GRUB list. However, I want OS-1 to be the default OS to boot. From OS-1, I tried two methods to change the GRUB menu's default OS: directly editing /etc/default/grub (then update-grub), and running 'Startup Manager' (then update-grub). Neither of these methods had any effect, so I booted OS-2 and tried method 1. It worked! Why can I not change the GRUB menu from OS-1? Or, if it can be done, how?
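
    A hedged explanation with a sketch: the menu shown at boot is built by whichever installation owns the MBR and its /boot/grub (OS-2 here), so update-grub run inside OS-1 only rewrites OS-1's own, unused copy. If OS-1 should be in charge of the menu, its GRUB has to be reinstalled to the MBR from within OS-1 (the disk name is hypothetical):
        sudo grub-install /dev/sda
        sudo update-grub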

    Read the article

  • How to fundamentally approach creating a 'financial planner' application?

    - by Anonymous -
    I want to create a financial planning application (for personal use) whose overall functionality will be this: the user (me) can create different 'scenarios'; each scenario is configured with different incomings and outgoings; and scenarios can be 'explored' in a calendar format, with projections taking into account tax, interest (on both debt and savings) and so on. My problem lies in how to fundamentally approach the project. I've considered two options. The first: when incomings/outgoings are created, have a script apply them to each day in a 'days' table of a database, acting as a form of caching, so that if I want to look at January 14th, 2074 there aren't thousands of cycles of calculations to run through and the result can just be pulled. The second: do each calculation dynamically, but here I'm finding it hard to visualize how I would handle different tax allowances (I'm based in the UK, by the way), pay rises and 'changes' to my incomings/outgoings. I've sat on this for a couple of days and am struggling to come up with an elegant approach. There may well be software out there that does what I'm looking for (in fact I'm sure there is), but I would like to develop this myself for learning purposes, to add it to my personal 'toolset', and to allow me to expand on it in the future. Many thanks to all who have any input on my dilemma.

    Read the article

  • Finding the lengths of file names and paths in a directory structure on a Linux file system

    - by Robert Nickens
    I have a problem on a Linux OS running a version of SMB (Samba): if the absolute path to a directory within a shared folder is longer than 1024 bytes and the filename component is longer than 256 bytes, the SMB service crashes and locks out all other network-access services, such as SSH and FTP, rendering the machine mute. To keep the system from crashing, I've temporarily moved the group of folders where I think the problem path may be located outside of the shared folder. I need to find the files and file paths that exceed these limits and rename or remove them, allowing me to return the bulk of the files to the shared folder. I've tried the find and grep commands without success. Is there a chain of commands or a script that I can use to hunt down the offending files and directories? Please advise.
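
    A hedged sketch of such a chain, run from the directory that now holds the moved folders; it flags any entry whose full path exceeds 1024 characters or whose final name component exceeds 256 characters (awk's length() counts characters, which equals bytes only for single-byte encodings, so treat the output as a candidate list):
        find . -print | awk '
            length($0) > 1024 { print "LONG PATH:", $0; next }
            {
                n = split($0, parts, "/")
                if (length(parts[n]) > 256) print "LONG NAME:", $0
            }'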

    Read the article
