Search Results

Search found 833 results on 34 pages for 'rm vanda'.

Page 5 of 34

  • Remove a file on linux using the inode number

    - by WebDevHobo
    If you create a file on UNIX/Linux with special characters in the name, like touch \"la*, you can't remove it with rm "la*. You have to use the inode number (you can remove it if you add the \ before the name, I know, but as a user you'd have to guess that it was used when the file was created). I checked the man page for rm, but there's no mention of the inode number, and rm inodenumber doesn't work either. What is the command for this?
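
    A sketch of the usual approach, assuming GNU find; the inode number and path below are placeholders (get the real number with ls -i):

        ls -il                                                 # -i prints each entry's inode number
        find . -maxdepth 1 -inum 1234567 -exec rm -i -- {} \;  # remove only the entry with that inode
        # GNU find also accepts -delete in place of -exec rm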

    Read the article

  • Cannot delete audit logs with sudo

    - by DazSlayer
    I am using auditctl to log all commands run on my Ubuntu system, and I am working on a script that parses the log into a more readable format. Since these logs tend to become very large, I want to periodically delete them. I found that by running sudo rm /var/log/audit/* I would get rm: cannot remove `/var/log/audit/*': No such file or directory, whereas by running sudo su rm /var/log/audit/* the logs would be deleted without any problem. What could be the cause of this?
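
    A likely explanation, offered as an assumption rather than something confirmed here: the * is expanded by the calling shell before sudo runs, and the unprivileged user cannot read /var/log/audit, so the pattern is passed to rm literally and rm reports "No such file or directory". Letting a root shell do the expansion avoids that; a minimal sketch:

        sudo sh -c 'rm -f /var/log/audit/*'   # the glob expands inside the root shell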

    Read the article

  • Command line safety tricks

    - by deadprogrammer
    The command line and scripting are dangerous. Make a little typo with rm -rf and you are in a world of hurt. Confuse prod with stage in the name of the database while running an import script and you are boned (if they are on the same server, which is not good, but happens). Same for noticing too late that the server you sshed into is not the one you thought it was after running some commands. You have to respect the Hole Hawg. I have a few little rituals before running risky commands, like doing a triple-take check of the server I'm on. Here's an interesting article on rm safety. What little rituals, tools and tricks keep you safe on the command line? And I mean objective things, like "first run ls foo*, look at the output of that and then substitute ls with rm -rf, to avoid running rm -rf foo * or something like that", not "make sure you know what the command will do".
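
    A minimal sketch of the ls-first ritual described above (foo* is just a placeholder pattern):

        ls -d -- foo*    # inspect exactly what the pattern matches
        rm -rf -- foo*   # reuse the identical pattern only after checking; -- guards against names starting with -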

    Read the article

  • Uninstalling Silverlight 4 beta on OS X

    - by Einar Ingebrigtsen
    I want to downgrade to SL3 on my Mac after accidentally installing SL4 Beta. I've tried the SL3 uninstall procedure:

        rm -rf /Library/Internet\ Plug-Ins/Silverlight.plugin
        rm -rf /Library/Receipts/Silverlight*.pkg
        rm -rf ~/Library/Application\ Support/Microsoft/Silverlight

    But I still get an error message when I try to install SL3 saying there is a newer version there. Anyone got any input on how to do this?
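
    One possibility, purely an assumption on top of the question: the installer's "newer version" check may be looking at the package receipt database rather than the files removed above. On OS X 10.5 and later, receipts can be listed and forgotten with pkgutil; the package id below is hypothetical, so use whatever the first command actually prints:

        pkgutil --pkgs | grep -i silverlight
        sudo pkgutil --forget com.microsoft.silverlight.plugin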

    Read the article

  • Conditional blocks of code in linux bash

    - by Arek
    Nearly everybody knows the very useful && and || operators, for example: rm myf && echo "File is removed successfully" || echo "File is not removed". I've got a question: how do you put a block of commands after the && or || operators without using a function? For example, I want to do:

        rm myf && \
            echo "File is removed successfully" \
            echo "another command executed when rm was successful" || \
            echo "File is not removed" \
            echo "another command executed when rm was NOT successful"

    What is the proper syntax for that script?
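
    The usual answer is to group the commands with braces rather than line continuations; a minimal sketch (each command inside the braces must end with a newline or a semicolon, and note that cmd && a || b also runs b if a itself fails, not only if cmd fails):

        rm myf && {
            echo "File is removed successfully"
            echo "another command executed when rm was successful"
        } || {
            echo "File is not removed"
            echo "another command executed when rm was NOT successful"
        }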

    Read the article

  • Website hosted on IIS is not accessible

    - by Tola Odejayi
    I have two sites set up in IIS on a remote machine RM; one on regular port 80, and the other on port 5773. From my local machine LM, I can access the site on 80, but I cannot access the one on 5773; I get a status code of 502 and an error code of 10060 (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) when I try to do this. I can access the 5773 site via IIS when I am logged into RM (i.e. by right clicking on a page on the site and going 'Browse'). I can also access pages on the 5773 site via a browser, again when I am logged into RM. I just can't do the same via a browser when I am logged into LM. I have ensured that port 5773 is open for outgoing traffic on LM. Could the problem be that I also need to ensure that port 5773 is open for inbound traffic on RM?

    Read the article

  • Log Problem and bash script

    - by GvWorker
    Hello guys, I have 11 Debian servers running on Rackspace cloud hosting, all running VHCS2 for hosting management. 1 server is used for the application and 10 are used only for SMTP. My question is about the SMTP servers; each server hosts 1 domain. My problem is that when my clients use SMTP, logs are created in /var/log/, and within 24 hours the drives are full and the server refuses all SMTP connections. I deleted the logs and ran df -h to check the disk space, but it still showed the disk as full and the server kept refusing SMTP connections. Then I ran du --max-depth=1 -h, which showed the real disk space used. I rebooted the server and it worked fine again, but after a few hours the same thing happened. So I created the following script:

        #!/bin/sh
        rm -fr /var/log/*
        rm -fr /var/log/apache2/*.log
        rm -fr /var/log/apache2/*.log.*
        rm -fr /var/log/apache2/users/*
        rm -fr /var/log/apache2/backup/*
        reboot

    It worked for days, but after that the logs are again filling the disk. I now want two things, if anybody can help me: 1) when I delete files, the disk space should be freed without rebooting, and 2) the logs should stay within a specific range, like a fixed-size file where old data is overwritten with new data.
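
    The df/du mismatch together with "fixed by reboot" usually means a daemon is still holding the deleted log files open, so the space is only released when the process exits. Truncating instead of deleting frees the space immediately; a sketch with example paths (for point 2, a size-based logrotate policy is the usual longer-term answer):

        # truncating keeps the inode the daemon has open, so disk space is freed at once
        for f in /var/log/mail.log /var/log/apache2/*.log; do
            : > "$f"
        done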

    Read the article

  • How to remove a directory which looks corrupted

    - by hap497
    I am using Ubuntu 9.10. When I examine a directory, it shows as '?' for user/ownership. How can I remove it?

        -rw-r--r-- 1 hap497 hap497 1822 2010-01-28 22:48 IntSizeHash.h
        d????????? ? ? ? ? ? .libs/
        -rw-r--r-- 1 hap497 hap497 194 2010-02-25 12:12 libwebkit_1_0_la-BitmapImage.lo

    I have tried rm and sudo rm but get an error:

        $ sudo rm -Rf .libs
        rm: cannot remove `.libs': Input/output error

    Thank you for any pointers.
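
    An rm that fails with Input/output error generally points at filesystem corruption rather than permissions, so the standard advice is to check the filesystem while it is not mounted. A sketch, where /dev/sda1 is only a placeholder for the affected partition (use a live CD if it is the root filesystem):

        sudo umount /dev/sda1
        sudo fsck -f /dev/sda1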

    Read the article

  • HTTP error code 405: Tomcat URL mapping issue

    - by Andrew
    I am having trouble POSTing to my Java HttpServlet. I am getting "HTTP Status 405 - HTTP method GET is not supported by this URL" from my Tomcat server. When I debug the servlet, the login method is never called. I think it's a URL mapping issue within Tomcat...

    web.xml

        <servlet-mapping>
            <servlet-name>faxcom</servlet-name>
            <url-pattern>/faxcom/*</url-pattern>
        </servlet-mapping>

    FaxcomService.java

        @Path("/rest")
        public class FaxcomService extends HttpServlet {

            private FAXCOM_x0020_ServiceLocator service;
            private FAXCOM_x0020_ServiceSoap port;

            @GET
            @Produces("application/json")
            public String testGet() {
                return "{ \"got here\":true }";
            }

            @POST
            @Path("/login")
            @Consumes("application/json")
            // @Produces("application/json")
            public Response login(LoginBean login) {
                ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(10);
                try {
                    service = new FAXCOM_x0020_ServiceLocator();
                    service.setFAXCOM_x0020_ServiceSoapEndpointAddress("http://cd-faxserver/faxcom_ws/faxcomservice.asmx");
                    service.setMaintainSession(true); // enable sessions support
                    port = service.getFAXCOM_x0020_ServiceSoap();
                    rm.add(new ResultMessageBean(port.logOn(
                        "\\\\CD-Faxserver\\FaxcomQ_API", /* path to the queue */
                        login.getUserName(),            /* username */
                        login.getPassword(),            /* password */
                        login.getUserType()             /* 2 = user conf user */
                    )));
                } catch (RemoteException e) {
                    e.printStackTrace();
                } catch (ServiceException e) {
                    e.printStackTrace();
                }
                // return rm;
                return Response.status(201).entity(rm).build();
            }

            @POST
            @Path("/newFaxMessage")
            @Consumes(MediaType.APPLICATION_JSON)
            @Produces(MediaType.APPLICATION_JSON)
            public ArrayList<ResultMessageBean> newFaxMessage(FaxBean fax) {
                ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>();
                try {
                    rm.add(new ResultMessageBean(port.newFaxMessage(
                        fax.getPriority(),        /* priority: 0 - low, 1 - normal, 2 - high, 3 - urgent */
                        fax.getSendTime(),        /* send time: "0.0" - immediate, "1.0" - offpeak, "9/14/2007 5:12:11 PM" - specific time */
                        fax.getResolution(),      /* resolution: 0 - low res, 1 - high res */
                        fax.getSubject(),         /* subject */
                        fax.getCoverpage(),       /* cover page: "" - default, "(none)" - no cover page */
                        fax.getMemo(),            /* memo */
                        fax.getSenderName(),      /* sender's name */
                        fax.getSenderFaxNumber(), /* sender's fax */
                        fax.getRecipients().get(0).getName(),          /* recipient's name */
                        fax.getRecipients().get(0).getCompany(),       /* recipient's company */
                        fax.getRecipients().get(0).getFaxNumber(),     /* destination fax number */
                        fax.getRecipients().get(0).getVoiceNumber(),   /* recipient's phone number */
                        fax.getRecipients().get(0).getAccountNumber()  /* recipient's account number */
                    )));
                    if (fax.getRecipients().size() > 1) {
                        for (int i = 1; i < fax.getRecipients().size(); i++)
                            rm.addAll(addRecipient(fax.getRecipients().get(i)));
                    }
                } catch (RemoteException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return rm;
            }
        }

    Main.java

        private static void main(String[] args) {
            try {
                URL url = new URL("https://andrew-vm/faxcom/rest/login");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/json");
                FileInputStream jsonDemo = new FileInputStream("login.txt");
                OutputStream os = (OutputStream) conn.getOutputStream();
                os.write(IOUtils.toByteArray(jsonDemo));
                os.flush();
                if (conn.getResponseCode() != 200) {
                    throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
                }
                BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String output;
                System.out.println("Output from Server .... \n");
                while ((output = br.readLine()) != null) {
                    System.out.println(output);
                }
                // Don't want to disconnect - servletInstance will be destroyed
                // conn.disconnect();
            } catch (MalformedURLException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    I am working from this tutorial: http://www.mkyong.com/webservices/jax-rs/restfull-java-client-with-java-net-url/

    Read the article

  • hello-1.mod.c:14: warning: missing initializer (near initialization for '__this_module.arch.unw_sec_init')

    - by Sompom
    I am trying to write a module for an SBC1651. Since the device is ARM, this requires a cross-compile. As a start, I am trying to compile the "Hello Kernel" module found here. This compiles fine on my x86 development system, but when I try to cross-compile I get the warning below:

        /home/developer/HelloKernel/hello-1.mod.c:14: warning: missing initializer
        /home/developer/HelloKernel/hello-1.mod.c:14: warning: (near initialization for '__this_module.arch.unw_sec_init')

    Since this is in the .mod.c file, which is autogenerated, I have no idea what's going on. The mod.c file seems to be generated by the module.h file. As far as I can tell, the relevant parts are the same between my x86 system's module.h and the ARM kernel headers' module.h. Adding to my confusion, this problem is either not googleable (by me...) or hasn't happened to anyone before. Or I'm just doing something clueless that anyone with any sense wouldn't do. The cross-compiler I'm using was supplied by Freescale (I think). I suppose it could be a problem with the compiler. Would it be worth trying to build the toolchain myself? Obviously, since this is a warning, I could ignore it, but since it's so strange, I am worried about it and would like to at least know the cause... Thanks very much, Sompom

    Here are the source files.

    hello-1.mod.c

        #include <linux/module.h>
        #include <linux/vermagic.h>
        #include <linux/compiler.h>

        MODULE_INFO(vermagic, VERMAGIC_STRING);

        struct module __this_module
        __attribute__((section(".gnu.linkonce.this_module"))) = {
            .name = KBUILD_MODNAME,
            .init = init_module,
        #ifdef CONFIG_MODULE_UNLOAD
            .exit = cleanup_module,
        #endif
            .arch = MODULE_ARCH_INIT,
        };

        static const struct modversion_info ____versions[]
        __used
        __attribute__((section("__versions"))) = {
            { 0x3972220f, "module_layout" },
            { 0xefd6cf06, "__aeabi_unwind_cpp_pr0" },
            { 0xea147363, "printk" },
        };

        static const char __module_depends[]
        __used
        __attribute__((section(".modinfo"))) =
        "depends=";

    hello-1.c (modified slightly from the given link)

        /* hello-1.c - The simplest kernel module.
         *
         * Copyright (C) 2001 by Peter Jay Salzman
         *
         * 08/02/2006 - Updated by Rodrigo Rubira Branco <[email protected]>
         */

        /* Kernel Programming */
        #ifndef MODULE
        #define MODULE
        #endif
        #ifndef LINUX
        #define LINUX
        #endif
        #ifndef __KERNEL__
        #define __KERNEL__
        #endif

        #include <linux/module.h>  /* Needed by all modules */
        #include <linux/kernel.h>  /* Needed for KERN_ALERT */

        static int hello_init_module(void)
        {
            printk(KERN_ALERT "Hello world 1.\n");
            /* A non 0 return means init_module failed; module can't be loaded. */
            return 0;
        }

        static void hello_cleanup_module(void)
        {
            printk(KERN_ALERT "Goodbye world 1.\n");
        }

        module_init(hello_init_module);
        module_exit(hello_cleanup_module);
        MODULE_LICENSE("GPL");

    Makefile

        export ARCH:=arm
        export CCPREFIX:=/opt/freescale/usr/local/gcc-4.4.4-glibc-2.11.1-multilib-1.0/arm-fsl-linux-gnueabi/bin/arm-linux-
        export CROSS_COMPILE:=${CCPREFIX}

        TARGET := hello-1
        WARN := -W -Wall -Wstrict-prototypes -Wmissing-prototypes -Wno-sign-compare -Wno-unused -Werror
        UNUSED_FLAGS := -std=c99 -pedantic
        EXTRA_CFLAGS := -O2 -DMODULE -D__KERNEL__ ${WARN} ${INCLUDE}
        KDIR ?= /home/developer/src/ltib-microsys/ltib/rpm/BUILD/linux-2.6.35.3

        ifneq ($(KERNELRELEASE),)
        # kbuild part of makefile
        obj-m := $(TARGET).o
        else
        # normal makefile
        default: clean
                $(MAKE) -C $(KDIR) M=$$PWD

        .PHONY: clean
        clean:
                -rm built-in.o
                -rm $(TARGET).ko
                -rm $(TARGET).ko.unsigned
                -rm $(TARGET).mod.c
                -rm $(TARGET).mod.o
                -rm $(TARGET).o
                -rm modules.order
                -rm Module.symvers
        endif

    Read the article

  • SSH connection times out unless I tunnel in from a different server

    - by rm-vanda
    OK, so this just started last week. Whenever we try to connect to our server via ssh (we use sftp as well), the connection times out. However, when you ssh to any other server and then ssh into the machine from there, it works flawlessly. Now, the mind-blowing thing is that sometimes the ssh connection will succeed. Moments ago, I tried it from another machine, and then my own, and it worked, only to time out the next go around. Last week, simply restarting the ssh daemon worked, but this week, no such luck. I even went in and changed /etc/hosts.allow to ALL : ALL, and /etc/hosts.deny is blank. The firewall config hasn't changed, but I even disabled the firewall to see if that would help. It did, for a moment, before cutting off again. (ufw is set to "ALLOW", not "LIMIT".) When I try SSH'ing in from my phone, it works fine. So it seems the problem is with our ISP/router/gateway; however, I see no log entry in the router/gateway that says it's blocking our connections, and that wouldn't explain why we can SSH into any other server except this one from our network. I truly appreciate any insight that anyone may have on this matter.

    Read the article

  • How do I create a permanent Bash alias?

    - by Bakhtiyor
    I would like to create an alias for the rm command in order to get a confirmation message after executing it. So I am creating an alias like this: alias rm='rm -i'. But as far as I know this is a temporary alias and it only lives until you close the terminal. As explained here, to save an alias permanently I need to execute ~/.bash_aliases or ~/.bashrc commands in terminal and add my alias there. But when I execute ~/.bashrc I get the following error message: bash: /home/bakhtiyor/.bashrc: Permission denied. When I run ~/.bash_aliases I get another error message: bash: /home/bakhtiyor/.bash_aliases: File or directory doesn't exist. What is the actual problem and how can I solve it?
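
    Those files are meant to be edited and then re-read, not executed; running them directly fails because ~/.bashrc is not executable and ~/.bash_aliases does not exist yet. A minimal sketch (Ubuntu's default ~/.bashrc sources ~/.bash_aliases when that file exists):

        echo "alias rm='rm -i'" >> ~/.bash_aliases   # creates the file if needed
        source ~/.bashrc                             # or simply open a new terminal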

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run:

        killall beam.smp beam
        rm ~/.config/desktop-couch/desktop-couchdb.ini
        dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort

    This is what my terminal came back with:

        robin@robin-MIDWAY:~$ killall beam.smp beam
        beam: no process found
        robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini
        rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory
        robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
        Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        robin@robin-MIDWAY:~$

    I'm a computer "newbie" so it's possible I'm doing something wrong. Are there any tutorials out there on how to use CouchDB? I have Bindwood installed.

    Read the article

  • Solaris 11 JDK installation

    - by user20121221
    JAVA - JDK install on Solaris 11:

        pkg list -s jdk-7
        pkg install jdk-7
        ls /usr/jdk/instances
        jdk1.5.0  jdk1.6.0
        mv jdk1.7.0_07 /usr/jdk/instances/jdk1.7.0
        ls -l
        total 9
        drwxr-xr-x 5 root bin 5 Sep 4 16:08 instances
        lrwxrwxrwx 1 root root 18 Jul 14 11:11 jdk1.6.0_26 -> instances/jdk1.6.0
        lrwxrwxrwx 1 root root 8 Sep 4 16:09 latest -> jdk1.6.0
        drwxr-xr-x 4 root bin 4 Jul 14 11:04 packages
        rm latest
        ln -s instances/jdk1.7.0 jdk1.7.0_07
        ln -s jdk1.7.0_07 latest
        cd /usr/
        ls -ld java
        lrwxrwxrwx 1 root root 12 Sep 4 16:12 java -> jdk/jdk1.6.0
        rm java
        ln -s jdk/latest java
        java -version
        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) Server VM (build 23.3-b01, mixed mode)

        stop firefox
        cd /usr/lib/firefox/plugins
        rm libnpjp2.so
        ln -s /usr/jdk/jdk1.7.0_07/jre/lib/i386/libnpjp2.so libnpjp2.so
        start firefox

    Read the article

  • How to permanently remove xcuserdata under the project.xcworkspace and resolve uncommitted changes

    - by JeffB6688
    I am struggling with a problem with a merge conflict (see Cannot Merge due to conflict with UserInterfaceState.xcuserstate). Based on feedback, I needed to remove the UserInterfaceState.xcuserstate using git rm. After considerable experimentation, I was able to remove the file with "git rm -rf project.xcworkspace/xcuserdata". So while I was on the branch I was working on, it almost immediately came back as a file that needed to be committed. So I did the git rm on the file again and just switched back to the master. Then I performed a git rm on the file again. The operation again removed the file. But I am still stuck. If I try to merge the branch into the master branch, it again says that I have uncommitted changes. So I go to commit the change. But this time, it shows UserInterfaceState.xcuserstate as the file to commit, but the box is unchecked and it can't be checked. So I can't move forward. Is there a way to use 'git rm' to permanently remove xcuserdata under the project.xcworkspace? Help!! Any ideas?
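
    A common sketch for getting xcuserdata out of version control for good (assuming the local copy should be kept and the path matches the repository layout above):

        git rm -r --cached project.xcworkspace/xcuserdata   # untrack without deleting the files on disk
        echo "xcuserdata/" >> .gitignore                    # keep it from coming back
        git add .gitignore
        git commit -m "Stop tracking xcuserdata"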

    Read the article

  • Uninstalling Silverlight 4 beta on OS X [closed]

    - by Einar Ingebrigtsen
    I want to downgrade to SL3 on my Mac after accidentally installing SL4 Beta. I've tried the SL3 uninstall procedure:

        rm -rf /Library/Internet\ Plug-Ins/Silverlight.plugin
        rm -rf /Library/Receipts/Silverlight*.pkg
        rm -rf ~/Library/Application\ Support/Microsoft/Silverlight

    But I still get an error message when I try to install SL3 saying there is a newer version there. Anyone got any input on how to do this?

    Read the article

  • How do you save and retrieve a Key/IV pair securely?

    - by Shawn Steward
    I'm using VB.Net's RijndaelManaged (RM) to encrypt files, using the RM.GenerateKey and RM.GenerateIV methods to generate the Key and IV and encrypting the file using the CryptoStream class. I'm planning on saving this Key and IV to a file and want to make sure I'm doing it the right way. I am combining the IV+Key, and encrypting that with my RSA Public key and writing it out to a file. Then, to decrypt I use the RSA Private key on this file to get the IV+Key, split them up and set RM.Key and RM.IV to these values and run the decryptor. Is this the best method to accomplish this, or is there a preferred method for saving the IV & Key? Also, what's the best way to construct and deconstruct the byte array? I used the .Concat method to join them together and that seems to work well but I can't seem to find something as easy to deconstruct it. I played with the .Take method that takes the first x # of bytes and it works for the first part but can't find anything that gets the rest of it.

    Read the article

  • gcc does not resolve extern global variables, with or without -c option

    - by Moons
    Hello everyone! So I have this issue: I am declaring some extern global variables in my C program. If I don't use the -c option for gcc, I get undefined reference errors. But with the -c option, the linking is not done, which means that no executable is generated. So how do I solve this? Here is my makefile. As I am not good at writing makefiles, I took one from another project and then changed a few things, so maybe I'm missing something here.

        # Makefile calculPi
        INCL = -I$(INCL_DIR)
        DEFS = -D_DEBUG_
        CXX_FLAGS =-g -c -lpthread -lm
        CXX = gcc $(CXX_FLAGS) $(INCL) $(DEFS)
        LINK_CXX = gcc
        OBJ = approx.o producteur.o sequentialApproximation.o main.o
        LINKOBJ = approx.o producteur.o sequentialApproximation.o main.o
        BIN = calculPi.exe
        RM = rm -fv

        all: calculPi.exe

        clean:
            ${RM} *\~ \#*\# $(OBJ)

        clean_all: clean
            ${RM} $(BIN)

        cleanall: clean
            ${RM} $(BIN)

        $(BIN): $(OBJ)
            $(CXX) $(LINKOBJ) -o "calculPi.exe"

        main.o: main.c
            $(CXX) main.c -o main.o $(CXX_FLAGS)

        approx.o: approx.c approx.h
            $(CXX) -c approx.c -o approx.o $(CXX_FLAGS);

        producteur.o: producteur.c producteur.h
            $(CXX) -c producteur.c -o producteur.o $(CXX_FLAGS);

        sequentialApproximation.o : sequentialApproximation.c sequentialApproximation.h
            $(CXX) -c sequentialApproximation.c -o sequentialApproximation.o $(CXX_FLAGS);

    Read the article

  • Is there a faster method than StringBuilder for a max 9-10 step string concatenation?

    - by Pentium10
    I have this code to concatenate some array elements:

        StringBuilder sb = new StringBuilder();

        private RatedMessage joinMessage(int step, boolean isresult) {
            sb.delete(0, sb.length());
            for (int i = 0; i <= step; i++) {
                if (mStack[i] == null)
                    continue;
                rm = mStack[i].getCurrentMsg();
                if (rm == null || rm.msg.length() == 0)
                    continue;
                if (sb.length() != 0) {
                    sb.append(", ");
                }
                sb.append(rm.msg);
            }
            return sb.toString();
        }

    Importantly, the array holds at most 10 items, so it's not much. My trace output tells me this method is called 18864 times and 16% of the runtime is spent in it. Can I optimize more?

    Read the article

  • The linking is not done (gcc compilation)

    - by Moons
    Hello everyone! So I have this issue: I am declaring some extern global variables in my C program. If I don't use the -c option for gcc, I get undefined reference errors. But with the -c option, the linking is not done, which means that no executable is generated. So how do I solve this? Here is my makefile. As I am not good at writing makefiles, I took one from another project and then changed a few things, so maybe I'm missing something here.

        # Makefile calculPi
        INCL = -I$(INCL_DIR)
        DEFS = -D_DEBUG_
        CXX_FLAGS =-g -c -lpthread -lm
        CXX = gcc $(CXX_FLAGS) $(INCL) $(DEFS)
        LINK_CXX = gcc
        OBJ = approx.o producteur.o sequentialApproximation.o main.o
        LINKOBJ = approx.o producteur.o sequentialApproximation.o main.o
        BIN = calculPi.exe
        RM = rm -fv

        all: calculPi.exe

        clean:
            ${RM} *\~ \#*\# $(OBJ)

        clean_all: clean
            ${RM} $(BIN)

        cleanall: clean
            ${RM} $(BIN)

        $(BIN): $(OBJ)
            $(CXX) $(LINKOBJ) -o "calculPi.exe"

        main.o: main.c
            $(CXX) main.c -o main.o $(CXX_FLAGS)

        approx.o: approx.c approx.h
            $(CXX) -c approx.c -o approx.o $(CXX_FLAGS);

        producteur.o: producteur.c producteur.h
            $(CXX) -c producteur.c -o producteur.o $(CXX_FLAGS);

        sequentialApproximation.o : sequentialApproximation.c sequentialApproximation.h
            $(CXX) -c sequentialApproximation.c -o sequentialApproximation.o $(CXX_FLAGS);

    Read the article

  • Make multiple targets in 'all'

    - by Hiett
    I'm trying to build a debug and a release version of a library with a Makefile and copy those libraries to the relevant build directories, e.g.:

        .PHONY: all clean distclean

        all: $(program_NAME_DEBUG)
            $(CP) $(program_NAME_DEBUG) $(BUILD_DIR)/debug/$(program_NAME_DEBUG)
            $(RM) $(program_NAME_DEBUG)
            $(RM) $(program_OBJS)
            $(program_NAME_RELEASE)
            $(CP) $(program_NAME_RELEASE) $(BUILD_DIR)/release/$(program_NAME_RELEASE)
            $(RM) $(program_NAME_RELEASE)
            $(RM) $(program_OBJS)

        $(program_NAME_DEBUG): $(program_OBJS)
            $(LINK_DEBUG.c) -shared -Wl,-soname,$(program_NAME_DEBUG) $(program_OBJS) -o $(program_NAME_DEBUG)

        $(program_NAME_RELEASE): $(program_OBJS)
            $(LINK_RELEASE.c) -shared -Wl,-soname,$(program_NAME_RELEASE) $(program_OBJS) -o $(program_NAME_RELEASE)

    The 1st target in all (program_NAME_DEBUG) compiles OK, but the 2nd (program_NAME_RELEASE) produces the following error:

        libGlamdring_rel.so
        make: libGlamdring_rel.so: Command not found
        make: *** [all] Error 127

    libGlamdring_rel.so is the value of program_NAME_RELEASE. It doesn't seem to be recognising the 2nd target as it does the 1st?

    Read the article

  • find: missing argument to -exec

    - by Abs
    Hello all, I was helped out today with a command, but it doesn't seem to be working. This is the command: find /home/me/download/ -type f -name "*.rm" -exec ffmpeg -i {} -sameq {}.mp3 && rm {}\; The shell returns find: missing argument to `-exec' What I am basically trying to do is go through a directory recursively (if it has other directories) and run the ffmpeg command on the .rm file types and convert them to .mp3 file types. Once this is done, remove the .rm file that has just been converted. I appreciate any help on this.
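
    The && is interpreted by the shell, not by find, so everything after it never reaches find and -exec is left without its terminating \; (which, separately, also needs a space before it: ... {} \;). One common fix is to hand the per-file pipeline to a small shell; a sketch that keeps the original file.rm.mp3 naming:

        find /home/me/download/ -type f -name "*.rm" \
          -exec sh -c 'ffmpeg -i "$1" -sameq "$1.mp3" && rm -- "$1"' _ {} \;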

    Read the article

  • broadcom BCM4318 firmware missing

    - by Lucas
    I tried:

        sudo apt-get update && apt-cache search b43
        sudo apt-get install firmware-b43-installer

    After that I get this error: "Can't update. Some index files failed to download." I tried:

        cd /var/lib/apt/lists/partial
        sudo rm gb.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources

    or:

        sudo rm /var/lib/apt/lists/partial/gb.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources

    which gives another error: cannot remove gb.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources.
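
    A sketch of the blunt cleanup when files under lists/partial are wedged (this only clears cached index files; they are re-downloaded on the next update):

        sudo rm -f /var/lib/apt/lists/partial/*
        sudo apt-get update
        sudo apt-get install firmware-b43-installer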

    Read the article

  • Cannot launch Software Centre or update

    - by Michal
    m@samsung:~$ sudo rm /var/lib/apt/lists/* -vf
    [sudo] password for m:
    rm: cannot remove `/var/lib/apt/lists/partial': Is a directory
    m@samsung:~$ sudo apt-get update
    N: Ignoring file 'gnomebaker.lis' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension
    E: Malformed line 1 in source list /etc/apt/sources.list.d/gnomebaker.list (URI parse)
    E: The list of sources could not be read.
    m@samsung:~$
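
    Going by the two E:/N: lines, apt is choking on the gnomebaker entries under /etc/apt/sources.list.d rather than on the lists cache. A sketch of the fix, as an assumption based only on those messages (removing the stray .lis file and the malformed .list file, or editing the latter instead of deleting it):

        sudo rm /etc/apt/sources.list.d/gnomebaker.lis /etc/apt/sources.list.d/gnomebaker.list
        sudo apt-get update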

    Read the article
