Search Results


  • How do I turn a Wi-Fi "hotspot" into a local wired network?

    - by Max Schmeling
    Here's the situation: in a remote "office" I have a computer with no network connection that I need to reach whenever I'm at that office. There is a wireless network where this computer sits, but no wireless adapter in the computer. I have a laptop running Windows 7 that can connect to the wireless network, and the desktop is running Windows Vista. What is the best way to get them both connected? I know I could buy a USB wireless adapter or something for the desktop, but is there an easy way to do it with what I've got?
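    One low-cost approach (a sketch, not from the original post; it assumes both machines have free Ethernet ports and a cable between them) is to share the laptop's wireless connection over its wired port with Internet Connection Sharing, which on Windows 7 hands out addresses in the 192.168.137.x range:

        rem On the Windows 7 laptop: enable sharing of the wireless adapter via
        rem Control Panel > Network Connections > (wireless) > Properties > Sharing,
        rem then connect the cable. On the Vista desktop, pick up a lease from ICS:
        netsh interface ip set address "Local Area Connection" dhcp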

    Read the article

  • How to redirect / route VPN traffic to the local network behind the remote peer?

    - by Milkywayfarer
    There are two computers: "HOME" with Ubuntu 10.10 installed, and "WORK" with WinXP installed. The WORK PC is behind a draconian firewall. However, let's imagine that there is a VPN connection between these two workstations, for example with TeamViewer, Hamachi, OpenVPN, or by some other means (by the way, what is the best means for such purposes?). One is interested in working with WORK's LAN resources from his HOME computer via VPN. So my question is about the configuration required on the WinXP machine (or maybe on both machines) to make such interaction possible. I'm guessing that some routing needs to be set up somewhere, but I don't know exactly what, or how to do it.
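    A minimal routing sketch (all addresses are placeholder assumptions, not from the post; adjust to your VPN and office subnets):

        # On HOME (Ubuntu): send traffic for the office LAN through WORK's VPN address.
        # Assumes the VPN interface is tun0, WORK's VPN IP is 10.8.0.2 and the
        # office LAN is 192.168.1.0/24:
        sudo route add -net 192.168.1.0 netmask 255.255.255.0 gw 10.8.0.2 dev tun0

        # On WORK (WinXP): allow forwarding between the VPN and the LAN by setting
        # the registry value
        # HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
        # to 1 and rebooting.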

    Read the article

  • Nginx works on my linux machine but is not accessible from other computers in my local network

    - by crooveck
    In my LAN I have a server with Scientific Linux (a RedHat/Fedora based distro). I've done yum install nginx, but the welcome page is not accessible from other computers in my network. When I do telnet (open localhost 80) and then GET / HTTP/1.0, I get some HTML code from nginx, so it's running for sure. But when I want to connect remotely, doing telnet (open 192.168.3.130 80), I get: Trying 192.168.3.130... telnet: Unable to connect to remote host: No route to host. So I assume that there is something wrong with my network settings, maybe iptables or something else? Next step, I turned off iptables (service iptables stop) and it helped; now I can connect remotely using telnet. So I think I need to fix my iptables rules. I did some googling and found this rule: -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT, but it still didn't allow me to connect remotely when iptables is up. Can someone please help me set up a proper iptables configuration?
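    A likely explanation (an assumption based on the "No route to host" rejection, not confirmed in the post): stock RedHat-style rulesets end with a REJECT rule, and -A appends the new ACCEPT rule after it, where it is never reached. Inserting it above the REJECT rule should work:

        # Insert before the final REJECT instead of appending (the position
        # number 4 is an example -- check "iptables -L INPUT --line-numbers"):
        iptables -I INPUT 4 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        service iptables save
        service iptables restart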

    Read the article

  • How to access files on another local Windows 7 computer without using any native Windows features?

    - by user1356682
    I do not want to use any native Windows features or services, nor anything else of that sort, to do this. It needs to be a standalone program with ZERO Windows dependencies. Just as TeamViewer does not use any native Windows features, I want to be able to access files and folders in a standalone program:
    - No remote desktop
    - No VNC-type programs
    - No Windows File Sharing
    - No Shared Folders in Windows
    - No internet connection required
    It needs the ability to view, edit, and transfer files at normal network rates.

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than on any other platform, including Windows with Cygwin. For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, MacPorts, fails:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick
        ---> Computing dependencies for p5-locale-gettext
        ---> Configuring p5-locale-gettext
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2
        Command output: checking for gettext... no
        checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18.
        no
        Error: Unable to upgrade port: 1
        Error: Unable to execute port: upgrade xorg-libXt failed
        Before reporting a bug, first run the command again with the -d flag to get complete output.
        tppllc-Mac-Pro:ImageMagick-sl swirsky$

    Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs ImageMagick independently of the MacPorts issues:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
        Referenced from: /opt/local/lib/libfontconfig.1.dylib
        Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv onto my machine. I downloaded the latest libiconv, compiled and built it, put the resulting library in /opt/local/lib, and I still get the same error message:

        tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib
        tppllc-Mac-Pro:.libs swirsky$ convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
        Referenced from: /opt/local/lib/libfontconfig.1.dylib
        Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So here's my question: is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions with static libraries baked in so I don't have to worry about these issues?
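    One quick diagnostic (a suggestion, not from the post): since dyld complains about the compatibility version recorded in the libraries' load commands, checking what the installed files actually report can confirm whether the freshly built libiconv is new enough:

        # Show the install names and versions libfontconfig expects,
        # and the install name the installed libiconv actually carries:
        otool -L /opt/local/lib/libfontconfig.1.dylib
        otool -D /opt/local/lib/libiconv.2.dylib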

    Read the article

  • Postgres could not connect to server

    - by Gary Lai
    After I did brew update and brew upgrade, my postgres got some problems. I tried to uninstall postgres and install it again, but that didn't work either. This is the error message (I also get it when I try to run rake db:migrate):

        $ psql
        psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    How can I solve it? Mac version: Mountain Lion. Homebrew version: 0.9.3. Postgres version: psql (PostgreSQL) 9.2.1. And this is what I did:

        12:30 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.2.1...
        12:31 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.1.4...
        12:31 ~/D/works$ psql --version
        bash: /usr/local/bin/psql: No such file or directory
        12:33 ~/D/works$ brew install postgresql
        ==> Downloading http://ftp.postgresql.org/pub/source/v9.2.1/postgresql-9.2.1.tar.bz2
        Already downloaded: /Library/Caches/Homebrew/postgresql-9.2.1.tar.bz2
        ......
        ......
        ==> Summary
        /usr/local/Cellar/postgresql/9.2.1: 2814 files, 38M, built in 2.7 minutes
        12:37 ~/D/works$ initdb /usr/local/var/postgres -E utf8
        The files belonging to this database system will be owned by user "laigary".
        This user must also own the server process.
        The database cluster will be initialized with locale "en_US.UTF-8".
        The default text search configuration will be set to "english".
        initdb: directory "/usr/local/var/postgres" exists but is not empty
        If you want to create a new database system, either remove or empty
        the directory "/usr/local/var/postgres" or run initdb
        with an argument other than "/usr/local/var/postgres".
        12:39 ~/D/works$ mkdir -p ~/Library/LaunchAgents
        12:39 ~/D/works$ cp /usr/local/Cellar/postgresql/9.2.1/homebrew.mxcl.postgresql.plist ~/Library/LaunchAgents/
        12:39 ~/D/works$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
        homebrew.mxcl.postgresql: Already loaded
        12:39 ~/D/works$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
        server starting
        12:39 ~/D/works$ env ARCHFLAGS="-arch x86_64" gem install pg
        Building native extensions. This could take a while...
        Successfully installed pg-0.14.1
        1 gem installed
        12:42 ~/D/works$ psql --version
        psql (PostgreSQL) 9.2.1
        12:42 ~/D/works$ psql
        psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
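    A plausible cause (an assumption, not stated in the post): the initdb output shows that /usr/local/var/postgres still holds the data directory from the old 9.1.4 install, so the 9.2.1 server refuses to start against it and never creates the socket. Checking the log and, if so, moving the old cluster aside would look like this:

        # See why the server is not actually starting:
        tail /usr/local/var/postgres/server.log
        # If it reports an incompatible data directory, move it aside and re-init
        # (this discards the old databases -- dump them first if you need them):
        mv /usr/local/var/postgres /usr/local/var/postgres.old
        initdb /usr/local/var/postgres -E utf8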

    Read the article

  • Maven not setting classpath for dependencies properly

    - by Matthew
    OS name: "linux" version: "2.6.32-27-generic" arch: "i386" Family: "unix"
    Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
    Java version: 1.6.0_20

    I am trying to use the MySQL dependency with Maven on Ubuntu. If I move the "mysql-connector-java-5.1.14.jar" file that Maven downloaded into my $JAVA_HOME/jre/lib/ext/ folder, everything is fine when I run the jar. I think I should be able to just specify the dependency in the pom.xml file and Maven should take care of setting the classpath for the dependency jars automatically. Is this incorrect? My pom.xml file looks like this:

        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.ion.common</groupId>
          <artifactId>TestPreparation</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>jar</packaging>
          <name>TestPrep</name>
          <url>http://maven.apache.org</url>
          <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
          </properties>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                  <archive>
                    <manifest>
                      <addClasspath>true</addClasspath>
                      <mainClass>com.ion.common.App</mainClass>
                    </manifest>
                  </archive>
                </configuration>
              </plugin>
            </plugins>
          </build>
          <dependencies>
            <!-- JUnit testing dependency -->
            <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>3.8.1</version>
              <scope>test</scope>
            </dependency>
            <!-- MySQL database driver -->
            <dependency>
              <groupId>mysql</groupId>
              <artifactId>mysql-connector-java</artifactId>
              <version>5.1.14</version>
              <scope>compile</scope>
            </dependency>
          </dependencies>
        </project>

    The command "mvn package" builds it without any problems, and I can run it, but when the application attempts to access the database, this error is presented:

        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:186)
            at com.ion.common.Functions.databases(Functions.java:107)
            at com.ion.common.App.main(App.java:31)

    The line it is failing on is:

        Class.forName("com.mysql.jdbc.Driver");

    Can anyone tell me what I'm doing wrong or how to fix it?
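    A common resolution (a sketch, not confirmed as the asker's fix): addClasspath only writes relative jar names into the manifest's Class-Path, so the driver jar must sit next to the built jar at runtime. The maven-dependency-plugin can copy it there during the package phase:

        <!-- Hypothetical addition to the <plugins> section above -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-dependencies</id>
              <phase>package</phase>
              <goals>
                <goal>copy-dependencies</goal>
              </goals>
              <configuration>
                <outputDirectory>${project.build.directory}</outputDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>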

    Read the article

  • Failed to load resource: the server responded with a status of 406 (Not Acceptable)

    - by skip
    I am trying to send JSON data and get JSON data back. I have <annotation-driven /> configured in my servlet-context.xml and I am using Spring framework version 3.1.0.RELEASE. When I send the request, the browser tells me it is not happy with the data returned from the server and gives me a 406 error, and in the response I see the whole 406 page returned by the Tomcat 6 server. I have the following in my pom for the Jackson processor:

        <!-- Jackson -->
        <dependency>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-core-asl</artifactId>
          <version>1.9.4</version>
        </dependency>
        <dependency>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-mapper-asl</artifactId>
          <version>1.9.4</version>
        </dependency>
        <dependency>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-xc</artifactId>
          <version>1.9.4</version>
        </dependency>
        <!-- Jackson JSON Processor -->
        <dependency>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-mapper-asl</artifactId>
          <version>1.8.1</version>
        </dependency>

    Following is the jQuery code I am using to send the JSON request:

        $(function(){$(".sutmit-button").click(function(){
          var dataString=$("#app-form").serialize();
          $.ajax({
            type:"POST",
            url:"apply.json",
            data:dataString,
            success:function(data, textStatus, jqXHR){
              console.log(jqXHR.status);
              console.log(data);
              $('.postcontent').html("<div id='message'></div>");
              $('#message').html("<h3>Request Submitted</h3>").append("<p>Thank you for submiting your request.</p>").hide().fadeIn(1500,function(){$('#message');});
            },
            error:function(jqXHR, textStatus, errorThrown) {
              console.log(jqXHR.status);
              $('.postcontent').html("<div id='message'></div>");
              $('#message').html("<h3>Request failed.</h3>").hide().fadeIn(1500,function(){$('#message');});
            },
            dataType: "json"
          });
          return false;});});

    Following is the controller that is handling the request:

        @RequestMapping(method = RequestMethod.POST, value="apply", headers="Accept=application/json")
        public @ResponseBody Application processFranchiseeApplicationForm(HttpServletRequest request) {
            //...
            Application application = new Application(...);
            //...
            logger.debug(application);
            return application;
        }

    I am not able to figure out the reason why I might be getting this error. Could someone help me understand why I am getting it? Thanks.
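    Two things worth checking (assumptions, not from the post): the pom declares jackson-mapper-asl twice with conflicting versions (1.9.4 and 1.8.1), so removing the 1.8.1 entry avoids a mixed Jackson classpath, and in Spring 3.1 the response media type can be declared with produces instead of matching the raw Accept header:

        // Hypothetical variant of the handler above using produces:
        @RequestMapping(method = RequestMethod.POST, value = "apply",
                        produces = "application/json")
        public @ResponseBody Application processFranchiseeApplicationForm(HttpServletRequest request) {
            // ... build and return the Application as before
        }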

    Read the article

  • Unable to install anything that depends upon spamassassin; can't even install spamassassin

    - by Harbhag
    I am trying to install MailScanner using apt-get install mailscanner and I got the following error:

        Setting up spamassassin (3.3.1-1) ...
        Starting SpamAssassin Mail Filter Daemon: child process [21344] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588.
        invoke-rc.d: initscript spamassassin, action "start" failed.
        dpkg: error processing spamassassin (--configure):
        subprocess installed post-installation script returned error exit status 255
        dpkg: dependency problems prevent configuration of mailscanner:
        mailscanner depends on spamassassin (>= 3.1); however:
        Package spamassassin is not configured yet.
        dpkg: error processing mailscanner (--configure):
        dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
        spamassassin
        mailscanner
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    and when I tried to install spamassassin I got the following error:

        Setting up spamassassin (3.3.1-1) ...
        Starting SpamAssassin Mail Filter Daemon: child process [21389] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588.
        invoke-rc.d: initscript spamassassin, action "start" failed.
        dpkg: error processing spamassassin (--configure):
        subprocess installed post-installation script returned error exit status 255
        dpkg: dependency problems prevent configuration of mailscanner:
        mailscanner depends on spamassassin (>= 3.1); however:
        Package spamassassin is not configured yet.
        dpkg: error processing mailscanner (--configure):
        dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
        spamassassin
        mailscanner
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I am using Ubuntu Server 10.04.
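    A diagnostic sketch (assumptions, not from the post): since spamd exits 255 before writing its PID file, running it in the foreground outside of dpkg usually reveals the real failure, and an unresolvable hostname is a frequently reported cause:

        # Run spamd with debug output to see why it dies:
        sudo spamd -D --pidfile=/tmp/spamd.pid
        # Check that the machine's hostname resolves:
        hostname && grep "$(hostname)" /etc/hosts
        # Once spamd starts cleanly, let dpkg finish:
        sudo dpkg --configure -a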

    Read the article

  • How do I install VLC when I get this error?

    - by YumYumYum
    How do I install VLC? It fails with the errors below.

        root@sun-desktop:/var/tmp# apt-get install vlc
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        vlc is already the newest version.
        The following packages were automatically installed and are no longer required:
        liblash3 libreoffice-l10n-common libgsf-1-common libcutter-dev pocketsphinx-hmm-wsj1 libfluidsynth1 libftgl2 projectm-data libprojectm-qt1 libgnomevfs2-extra libbml0 libprojectm2 libpocketsphinx1 libsphinxbase1 buzztard-data libbabl-0.0-0 libgegl-0.0-0 libhal1 libgsf-1-114 libsidplay1 pocketsphinx-utils liboil0.3 pocketsphinx-lm-wsj libcutter0 cutter-testing-framework-bin
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 239 not upgraded.
        2 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up vlc-nox (1.1.9-1ubuntu1.3) ...
        /var/lib/dpkg/info/vlc-nox.postinst: 10: /usr/lib/vlc/vlc-cache-gen: not found
        dpkg: error processing vlc-nox (--configure):
        subprocess installed post-installation script returned error exit status 127
        dpkg: dependency problems prevent configuration of vlc:
        vlc depends on vlc-nox (= 1.1.9-1ubuntu1.3); however:
        Package vlc-nox is not configured yet.
        dpkg: error processing vlc (--configure):
        dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
        vlc-nox
        vlc
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        # sudo apt-get autoremove vlc vlc-nox
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package vlc is not installed, so not removed
        Package vlc-nox is not installed, so not removed
        0 upgraded, 0 newly installed, 0 to remove and 237 not upgraded.
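    A plausible recovery (an assumption, not from the post): vlc-nox is half-installed and its postinst script needs /usr/lib/vlc/vlc-cache-gen, which is missing, so purging the broken packages and reinstalling from scratch is the usual way out:

        sudo apt-get purge vlc vlc-nox
        sudo apt-get update
        sudo apt-get install vlc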

    Read the article

  • Cannot uninstall myspell-he

    - by Yuval Rabinovich
    I tried to install a Hebrew spell checker for LibreOffice. I downloaded a package named myspell-he_1.1-1_all.deb and tried to install it through the Software Center; installation failed and I can neither complete the installation nor remove it. Since then, whenever I run Update Manager I get an error message, "The package system is broken", with the details: The following packages have unmet dependencies: dictionaries-common. When I try to run the Software Center I get a pop-up window: "Items cannot be installed or removed until the package catalog is repaired." A Repair option is suggested. If I click it, the following message appears:

        installArchives() failed: dpkg: dependency problems prevent configuration of myspell-he:
        dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed.
        Version of myspell-he to be configured is 1.1-1.
        dpkg: error processing myspell-he (--configure):
        dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
        myspell-he
        Error in function:
        dpkg: dependency problems prevent configuration of myspell-he:
        dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed.
        Version of myspell-he to be configured is 1.1-1.
        dpkg: error processing myspell-he (--configure):
        dependency problems - leaving unconfigured

    How can I clear this mess?
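    One way out (hedged, not from the post): since dictionaries-common declares that it breaks this old myspell-he, forcing dpkg to drop the half-configured package and then letting apt repair the rest usually clears the state:

        sudo dpkg --remove --force-remove-reinstreq myspell-he
        sudo apt-get install -f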

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba es453, for what it's worth) which is in a local network. I can reach the local network using a gateway. So far I did the following:

        ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway>

    Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I installed a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP JetDirect), and I provided the proper driver for the printer. In theory, once the tunnel is open I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:

        debug1: Local forwarding listening on ::1 port 19100.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 19100.
        debug1: channel 1: new [port listener]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection timed out
        debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]

    As a further debugging test, from a machine inside the local network I did telnet <IP_printer> 9100, got access, wrote some random thing, closed the connection, and correctly got a print of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open: the telnet succeeded but, again, the printer didn't print anything, giving the usual "channel x: open failed:" errors. I'm not a great expert on the matter; I just thought that in theory it was possible to do something like this, but maybe there is something that I didn't consider or did wrong. Any clue? Thanks! Simone

    [update] As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine, I did

        ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway>

    (note that now the machine, the gateway and the printer are all in the same local network). Then I tried the telnet test again with telnet localhost 19100; I got access and everything, but I didn't get the print, only the usual error:

        channel 2: open failed: connect failed: Connection timed out

    Maybe I am missing some other connection to be forwarded, or maybe this is not allowed by the administrators.
    Of course, if I connect via ssh tunneling to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy... :-); I would like a more 'elegant' and transparent way to do it.
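    A quick raw test from the laptop (a suggestion, not from the post) separates the CUPS configuration from the tunnel itself: if this prints, the forwarding works and the problem is in the print queue setup; if it stalls with the same channel errors, the gateway simply cannot reach <Printer_IP>:9100:

        echo "tunnel test" | nc localhost 19100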

    Read the article

  • Cannot update Eclipse due to conflicting dependencies

    - by kemra102
    I installed Eclipse via the Ubuntu repos (I'm on Ubuntu 11.10). Then I added the Indigo repo (http://download.eclipse.org/releases/indigo/), as only Helios repos were listed as part of the default install. If I go to Help > Check for Updates, a number of updates are listed for install; however, when I click Next I get the following error:

        Cannot complete the install because of a conflicting dependency.
        Software being installed: Eclipse Java Development Tools 3.7.1.r371_v20110810-0800-7z8gFcoFMLfTabvKsR5Qm9rBGEBK (org.eclipse.jdt.feature.group 3.7.1.r371_v20110810-0800-7z8gFcoFMLfTabvKsR5Qm9rBGEBK)
        Software currently installed: Shared profile 1.0.0.1317160468326 (SharedProfile_PlatformProfile 1.0.0.1317160468326)
        Only one of the following can be installed at once:
        JSch UI 1.1.300.dist (org.eclipse.jsch.ui 1.1.300.dist)
        JSch UI 1.1.300.I20110511-0800 (org.eclipse.jsch.ui 1.1.300.I20110511-0800)
        Cannot satisfy dependency:
        From: Shared profile 1.0.0.1317160468326 (SharedProfile_PlatformProfile 1.0.0.1317160468326)
        To: org.eclipse.jsch.ui [1.1.300.dist]
        Cannot satisfy dependency:
        From: Eclipse Java Development Tools 3.7.1.r371_v20110810-0800-7z8gFcoFMLfTabvKsR5Qm9rBGEBK (org.eclipse.jdt.feature.group 3.7.1.r371_v20110810-0800-7z8gFcoFMLfTabvKsR5Qm9rBGEBK)
        To: org.eclipse.platform.feature.group 3.7.1
        Cannot satisfy dependency:
        From: Eclipse Platform 3.7.1.r37x_v20110729-9gF7UHOxFtniV7mI3T556iZN9AU8bEZ1lHMcVK (org.eclipse.platform.feature.group 3.7.1.r37x_v20110729-9gF7UHOxFtniV7mI3T556iZN9AU8bEZ1lHMcVK)
        To: org.eclipse.jsch.ui [1.1.300.I20110511-0800]

    I have tried fully removing Eclipse and all config files and re-installing, but that doesn't help. I can't find any info from Googling around either.

    Read the article

  • GlassFish Server 3.1.2.2 Maven Coordinates

    - by arungupta
    GlassFish Server 3.1.2.2 was released a few weeks ago. This micro release delivered a couple of important bug fixes - one in JAX-WS (JAX-WS-1059) and another related to the JK listener with Apache + mod_ajp_proxy (GLASSFISH-18446). This release is already integrated in NetBeans 7.2, and you can download it separately from here. Maven coordinates for this build are now also described on the download page. The following fragment in your pom.xml will allow you to invoke embedded-glassfish:run, embedded-glassfish:deploy, and other similar commands:

        <dependency>
          <groupId>org.glassfish.embedded</groupId>
          <artifactId>maven-embedded-glassfish-plugin</artifactId>
          <version>3.1.2.2</version>
        </dependency>

    The GlassFish Embedded Server Guide provides more details about setup etc. Similarly, the full platform or Web profile implementation of GlassFish can be included as a single JAR using:

        <dependency>
          <groupId>org.glassfish.main.extras</groupId>
          <artifactId>glassfish-embedded-web</artifactId>
          <version>3.1.2.2</version>
        </dependency>

    Of course, you need to replace "glassfish-embedded-web" with "glassfish-embedded-all" to get the complete platform. The download page provides more details on the different bundles and the complete Maven coordinates. Or you can get started with the simple zip bundle as well.
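    For reference, a typical invocation once the fragment above is in the pom (a usage sketch; the goal name comes from the post itself):

        mvn embedded-glassfish:run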

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 using the commands below:

        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql

    However, I receive this error every time:

        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
        subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
        linux-image-server depends on linux-image-3.2.0-33-generic; however:
        Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
        linux-server depends on linux-image-server (= 3.2.0.33.36); however:
        Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
        dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
        subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
        linux-image-3.2.0-33-generic
        linux-image-3.2.0-34-generic
        linux-image-server
        linux-server
        initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure what's wrong or why it can't process the linux-image files.
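    The tell-tale line is "gzip: stdout: No space left on device" while generating the initramfs, which almost always means the /boot partition is full. A hedged cleanup sketch (the kernel version to purge is an example taken from the log; keep the one you are booted into, per "uname -r"):

        df -h /boot
        uname -r
        sudo apt-get purge linux-image-3.2.0-32-generic
        sudo apt-get install -f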

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update, although I have over 100 updates to do. I get an error message like this:

        installArchives() failed:
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 189751 files and directories currently installed.)
        Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ...
        apport stop/waiting
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        apport start/running
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 1
        Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%3a2.6.4-1ubuntu1.1_amd64.deb) ...
        Unpacking replacement libglade2-0 ...
        Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ...
        De-configuring libv4l-0:i386 ...
        Unpacking replacement libv4l-0 ...
        Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ...
        Unpacking replacement libv4l-0:i386 ...
        Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ...
        De-configuring libv4lconvert0 ...
        Unpacking replacement libv4lconvert0:i386 ...
        Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ...
        Unpacking replacement libv4lconvert0 ...
        Errors were encountered while processing:
        /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb
        /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb
        /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb
        /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb
        /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb
        /var/cache/apt/archives/oneconf_0.2.8.1_all.deb
        /var/cache/apt/archives/software-center_5.2.2.2_all.deb
        Error in function:
        SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
        Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ...
        dpkg: error processing gnome-orca (--configure):
        Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        dpkg: error processing python-problem-report (--configure):
        Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Setting up libv4lconvert0 (0.8.6-1ubuntu2) ...
        Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ...
        dpkg: error processing python-piston-mini-client (--configure):
        Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Setting up libv4l-0 (0.8.6-1ubuntu2) ...
        Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ...
        dpkg: dependency problems prevent configuration of python-apport:
        python-apport depends on python-problem-report (>= 0.94); however:
        Package python-problem-report is not configured yet.
        dpkg: error processing python-apport (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of software-center:
        software-center depends on python-piston-mini-client (>= 0.1+bzr29); however:
        Package python-piston-mini-client is not configured yet.
        dpkg: error processing software-center (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of oneconf:
        oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however:
        Package python-piston-mini-client is not configured yet.
        dpkg: error processing oneconf (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of apport:
        apport depends on python-apport (>= 2.0.1-0ubuntu7); however:
        Package python-apport is not configured yet.
        dpkg: error processing apport (--configure):
        dependency problems - leaving unconfigured
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    This has been going on for two weeks now and I cannot get any updates. Any help would be great.
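    The repeated "ValueError: bad marshal data (unknown type code)" from /usr/bin/pyclean and /usr/bin/pycompile points at corrupt byte-compiled Python files rather than at the packages themselves. A commonly reported recovery (hedged; it only deletes regenerable .pyc files):

        sudo find /usr/lib/python2.7 -name '*.pyc' -delete
        sudo apt-get install -f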

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused proto item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term, case by case project, but the overall trend is clear.

    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
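    To make the mechanics concrete, here is a hypothetical sketch (the library names, paths, and variables are illustrative assumptions, not taken from the OSnet makefiles): a non-OSnet dependency gets a symlink under the stub proto so that the -L search paths satisfy it, and -z assert-deflib then flags any library that still comes from /lib or /usr/lib:

        # Make the non-OSnet library resolvable via -L rather than the defaults:
        ln -s /usr/lib/libz.so $STUBPROTO/usr/lib/libz.so

        # Link an OSnet object; any -l dependency found only via the default
        # paths now draws a warning (fatal with -z fatal-warnings):
        ld -G -o libfoo.so foo.o \
            -L$PROTO/usr/lib -L$STUBPROTO/usr/lib \
            -z assert-deflib -z fatal-warnings -lz -lc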
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Errors when installing updates

    - by user71613
    I am getting the following errors when installing updates. They started to appear after I upgraded my system to 12.04.

        Errors were encountered while processing:
        samba-common
        samba-common-bin
        samba
        grub-pc
        grub-gfxpayload-lists

        Setting up samba-common (2:3.6.3-2ubuntu2.2) ...
        perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory
        dpkg: error processing samba-common (--configure):
        subprocess installed post-installation script returned error exit status 127
        dpkg: dependency problems prevent configuration of samba-common-bin:
        samba-common-bin depends on samba-common (>= 2:3.4.0~pre1-2); however:
        Package samba-common is not configured yet.
        dpkg: error processing samba-common-bin (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of samba:
        samba depends on samba-common (= 2:3.6.3-2ubuntu2.2); however:
        Package samba-common is not configured yet.
        samba depends on samba-common-bin; however:
        Package samba-common-bin is not configured yet.
        dpkg: error processing samba (--configure):
        dependency problems - leaving unconfigured
        Setting up grub-gfxpayload-lists (0.6) ...
        Setting up grub-pc (1.99-21ubuntu3.1) ...
        perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory
        dpkg: error processing grub-pc (--configure):
        subprocess installed post-installation script returned error exit status 127

    Any ideas how to fix this?
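    A recovery sketch (an assumption, not from the post): libperl.so.5.12 belongs to perl 5.12, but Ubuntu 12.04 ships perl 5.14, so something is still linked against the pre-upgrade perl. Reinstalling the perl packages and re-running the pending configuration usually restores consistency:

        sudo apt-get update
        sudo apt-get install --reinstall perl perl-base perl-modules
        sudo dpkg --configure -a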

    Read the article

  • I cannot install wine 1.5 on Ubuntu 12.04 x64

    - by user26286
    I tried:

        $ sudo add-apt-repository ppa:ubuntu-wine/ppa
        $ sudo apt-get update
        $ sudo apt-get install wine1.5

    The result was:

        $ sudo apt-get install wine1.5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
        wine1.5 : Depends: wine1.5-i386 (= 1.5.6-0ubuntu1~pulse17)
        E: Unable to correct problems, you have held broken packages.

    Then I tried:

        $ sudo dpkg --configure -a
        $ sudo apt-get install -f

    The result was:

        $ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    After all that I tried:

        $ sudo apt-get build-dep wine1.5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have unmet dependencies:
        libgnutls-dev : Depends: libgnutls26 (= 2.12.14-5ubuntu3) but 2.12.19-1 is to be installed
        Depends: libgnutlsxx27 (= 2.12.14-5ubuntu3) but it is not going to be installed
        Depends: libgnutls-openssl27 (= 2.12.14-5ubuntu3) but it is not going to be installed
        E: Build-dependencies for wine1.5 could not be satisfied.

    I also tried:

        $ sudo apt-get install -y ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
        ia32-libs : Depends: ia32-libs-multiarch
        E: Unable to correct problems, you have held broken packages.

    Can somebody help me? Thanks.
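    The third transcript shows the actual blocker: libgnutls26 is at 2.12.19-1, a version that does not come from the 12.04 archive, which makes libgnutls-dev and the ia32 stack uninstallable. A hedged way out, using the version string quoted in the output above, is to downgrade it and retry:

        sudo apt-get install libgnutls26=2.12.14-5ubuntu3
        sudo apt-get install wine1.5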

    Read the article

  • JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:

    • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    • JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    • JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite. If you have not already done so, I recommend you look at the previous posts in this series, as they include steps which this example builds upon. The following examples will demonstrate how to write to and read from the queue from a SOA process.

    1. Recap and Prerequisites

    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we wrote and deployed BPEL composites, which enqueued and dequeued a simple XML payload.
    AQ JMS allows you to interoperate with database Advanced Queueing via JMS in WebLogic Server and therefore take advantage of database features, while maintaining compliance with the JMS architecture. AQ JMS uses the WebLogic JMS Foreign Server framework. A full description of this functionality can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06, Chapter 7, Interoperating with Oracle AQ JMS: http://docs.oracle.com/cd/E23943_01/web.1111/e13738/aq_jms.htm#CJACBCEJ

    For easier reference, this sample uses the same names for the objects as in the above document, except for the name of the database user, as it is possible that this user already exists in your database. We will create the following objects.

    Database Objects

        Name           Type
        AQJMSUSER      Database User
        MyQueueTable   Advanced Queue (AQ) Table
        UserQueue      Advanced Queue

    WebLogic Server Objects

        Object Name                           Type                                     JNDI Name
        aqjmsuserDataSource                   Data Source                              jdbc/aqjmsuserDataSource
        AqJmsModule                           JMS System Module                        -
        AqJmsForeignServer                    JMS Foreign Server                       -
        AqJmsForeignServerConnectionFactory   JMS Foreign Server Connection Factory    AqJmsForeignServerConnectionFactory
        AqJmsForeignDestination               AQ JMS Foreign Destination               queue/USERQUEUE
        eis/aqjms/UserQueue                   Connection Pool                          eis/aqjms/UserQueue

    2. Create a Database User and Advanced Queue

    The following steps can be executed in the database client of your choice, e.g. JDeveloper or SQL Developer. The examples below use SQL*Plus. Log in to the database as a DBA user, for example SYSTEM or SYS, then create the AQJMSUSER user and grant the privileges needed to create AQ objects:

        sqlplus system/password as SYSDBA
        GRANT connect, resource TO aqjmsuser IDENTIFIED BY aqjmsuser;
        GRANT aq_user_role TO aqjmsuser;
        GRANT execute ON sys.dbms_aqadm TO aqjmsuser;
        GRANT execute ON sys.dbms_aq TO aqjmsuser;
        GRANT execute ON sys.dbms_aqin TO aqjmsuser;
        GRANT execute ON sys.dbms_aqjms TO aqjmsuser;

    Create the Queue Table and Advanced Queue and Start the AQ

    The following commands are executed as the aqjmsuser database user.

    Create the queue table:

        connect aqjmsuser/aqjmsuser;
        BEGIN
          dbms_aqadm.create_queue_table (
            queue_table        => 'myQueueTable',
            queue_payload_type => 'sys.aq$_jms_text_message',
            multiple_consumers => false );
        END;
        /

    Create the AQ:

        BEGIN
          dbms_aqadm.create_queue (
            queue_name  => 'userQueue',
            queue_table => 'myQueueTable' );
        END;
        /

    Start the AQ:

        BEGIN
          dbms_aqadm.start_queue ( queue_name => 'userQueue' );
        END;
        /

    The above commands can be executed in a single PL/SQL block, but are shown as separate blocks in this example for ease of reference. You can verify the queue by executing the SQL command SELECT object_name, object_type FROM user_objects; which should display the following objects:

        OBJECT_NAME                    OBJECT_TYPE
        ------------------------------ -------------------
        SYS_C0056513                   INDEX
        SYS_LOB0000170822C00041$$      LOB
        SYS_LOB0000170822C00040$$      LOB
        SYS_LOB0000170822C00037$$      LOB
        AQ$_MYQUEUETABLE_T             INDEX
        AQ$_MYQUEUETABLE_I             INDEX
        AQ$_MYQUEUETABLE_E             QUEUE
        AQ$_MYQUEUETABLE_F             VIEW
        AQ$MYQUEUETABLE                VIEW
        MYQUEUETABLE                   TABLE
        USERQUEUE                      QUEUE

    Similarly, you can view the objects in JDeveloper via a Database Connection to the AQJMSUSER.
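    Optionally, you can confirm at this point that the queue accepts JMS text messages directly from SQL*Plus, before any WebLogic configuration is involved. The following enqueue block is a sketch added here for that purpose (the payload text is arbitrary; it is not part of the original setup steps):

        DECLARE
          enqueue_options    dbms_aq.enqueue_options_t;
          message_properties dbms_aq.message_properties_t;
          msg_handle         RAW(16);
          msg                sys.aq$_jms_text_message;
        BEGIN
          -- build a JMS TextMessage payload and enqueue it
          msg := sys.aq$_jms_text_message.construct;
          msg.set_text('Hello from PL/SQL');
          dbms_aq.enqueue(
            queue_name         => 'userQueue',
            enqueue_options    => enqueue_options,
            message_properties => message_properties,
            payload            => msg,
            msgid              => msg_handle);
          COMMIT;
        END;
        /

    A SELECT COUNT(*) FROM myQueueTable; afterwards should show the message waiting in the queue table.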
    3. Configure WebLogic Server and Add JMS Objects

    All these steps are executed from the WebLogic Server Administration Console. Log in as the weblogic user.

    Configure a WebLogic Data Source

    The data source is required for the database connection to the AQ created above. Navigate to domain > Services > Data Sources and press New, then Generic Data Source. Use the values:

        Name:            aqjmsuserDataSource
        JNDI Name:       jdbc/aqjmsuserDataSource
        Database type:   Oracle
        Database Driver: Oracle's Driver (Thin XA) for Instance connections; Versions: 9.0.1 and later

    For the Connection Properties, enter the connection information to the database containing the AQ created above and enter aqjmsuser for the User Name and Password. Press Test Configuration to verify the connection details and press Next. Target the data source to the soa server. The data source will be displayed in the list.

    It is a good idea to test the data source at this stage. Click on aqjmsuserDataSource, select Monitoring > Testing > soa_server1 and press Test Data Source. The result is displayed at the top of the page.

    Configure a JMS System Module

    The JMS system module is required to host the JMS foreign server for AQ resources. Navigate to Services > Messaging > JMS Modules and select New. Use the values:

        Name:   AqJmsModule (leave Descriptor File Name and Location in Domain empty)
        Target: soa_server1

    Click Finish. The other resources will be created in separate steps. The module will be displayed in the list.

    Configure a JMS Foreign Server

    A foreign server is required in order to reference a third-party JMS provider, in this case the database AQ, within a local WebLogic server JNDI tree. Navigate to Services > Messaging > JMS Modules and click on AqJmsModule to configure it. Under Summary of Resources, select New, then Foreign Server.

        Name:    AqJmsForeignServer
        Targets: the foreign server is targeted automatically to soa_server1, based on the JMS module's target

    Press Finish to create the foreign server. The foreign server resource will be listed in the Summary of Resources for the AqJmsModule, but needs additional configuration steps. Click on AqJmsForeignServer and select Configuration > General to complete the configuration:

        JNDI Initial Context Factory:       oracle.jms.AQjmsInitialContextFactory
        JNDI Connection URL:                <empty>
        JNDI Properties Credential:         <empty>
        Confirm JNDI Properties Credential: <empty>
        JNDI Properties:                    datasource=jdbc/aqjmsuserDataSource
        Default Targeting Enabled:          leave this value checked

    The JNDI Properties entry is important. It is the JNDI name of the data source created above, which points to the AQ schema in the database, and it must be entered as a name=value pair including the "datasource=" property name, e.g. datasource=jdbc/aqjmsuserDataSource. Press Save to save the configuration.

    At this point it is a good idea to verify that the data source was written correctly to the config file. In a terminal window, navigate to $MIDDLEWARE_HOME/user_projects/domains/soa_domain/config/jms and open the file aqjmsmodule-jms.xml. The foreign server configuration, inside the <weblogic-jms> element, should contain the datasource name-value pair, as follows:

        <foreign-server name="AqJmsForeignServer">
          <default-targeting-enabled>true</default-targeting-enabled>
          <initial-context-factory>oracle.jms.AQjmsInitialContextFactory</initial-context-factory>
          <jndi-property>
            <key>datasource</key>
            <value>jdbc/aqjmsuserDataSource</value>
          </jndi-property>
        </foreign-server>
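    If you need to repeat this setup across environments, the same module and foreign server can be created with WLST instead of the console. The following is a rough sketch only; the admin URL, credentials and server name are assumptions for this example, and the data source is expected to exist already:

        # run with $MW_HOME/oracle_common/common/bin/wlst.sh
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        edit()
        startEdit()
        # JMS system module, targeted like the console steps above
        cd('/')
        module = cmo.createJMSSystemResource('AqJmsModule')
        module.addTarget(getMBean('/Servers/soa_server1'))
        # foreign server with the AQ initial context factory and datasource property
        cd('/JMSSystemResources/AqJmsModule/JMSResource/AqJmsModule')
        cmo.createForeignServer('AqJmsForeignServer')
        cd('ForeignServers/AqJmsForeignServer')
        cmo.setDefaultTargetingEnabled(true)
        cmo.setInitialContextFactory('oracle.jms.AQjmsInitialContextFactory')
        cmo.createJNDIProperty('datasource')
        cd('JNDIProperties/datasource')
        cmo.setValue('jdbc/aqjmsuserDataSource')
        save()
        activate()

    The connection factory and destination configured in the next steps can be added in the same session via createForeignConnectionFactory and createForeignDestination on the foreign server bean.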
    Configure a JMS Foreign Server Connection Factory

    When creating the foreign server connection factory, you enter local and remote JNDI names. The name of the connection factory itself and the local JNDI name are arbitrary, but the remote JNDI name must match a specific format, depending on the type of queue or topic to be accessed in the database. This is very important: if an incorrect value is used, the connection to the queue will not be established, and the error messages you get will not immediately reflect the cause of the error. The required formats (remote JNDI names for AQ JMS connection factories) are described in the section Configure AQ Destinations of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In this example, the remote JNDI name used is XAQueueConnectionFactory, because it matches the AQ and data source created earlier, i.e. an XA thin driver with AQ.

    Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories, then New.

        Name:            AqJmsForeignServerConnectionFactory
        Local JNDI Name: AqJmsForeignServerConnectionFactory
        Remote JNDI Name: XAQueueConnectionFactory

    Note: the local JNDI name is the JNDI name which your client application, e.g. a later BPEL process, will use to access this connection factory. Press OK to save the configuration.

    Configure an AQ JMS Foreign Server Destination

    A foreign server destination maps the JNDI name on the foreign JNDI provider to the respective local JNDI name, allowing the foreign JNDI name to be accessed via the local server. As with the foreign server connection factory, the local JNDI name is arbitrary (but must be unique), while the remote JNDI name must conform to a specific format defined in the section Configure AQ Destinations of the document mentioned earlier. In our example, the remote JNDI name is Queues/USERQUEUE, because it references a queue (as opposed to a topic) with the name USERQUEUE. We will name the local JNDI name queue/USERQUEUE, which is a little confusing (note the missing "s" in "queue"), but conforms better to the JNDI nomenclature in our SOA server and also allows us to differentiate between the local and remote names for demonstration purposes.

    Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Destinations and select New.

        Name:             AqJmsForeignDestination
        Local JNDI Name:  queue/USERQUEUE
        Remote JNDI Name: Queues/USERQUEUE

    After saving the foreign destination configuration, this completes the JMS part of the configuration.
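    The wiring can now be smoke-tested with a small standalone JMS client. This is only a sketch: the t3 URL and port are assumptions, the WebLogic client (wlfullclient.jar) and the AQ JMS client libraries must be on the classpath, and depending on your topology you may prefer to run the equivalent lookup from server-side code instead:

        import java.util.Hashtable;
        import javax.jms.*;
        import javax.naming.*;

        public class AqJmsSmokeTest {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:8001"); // soa_server1, assumed port
                Context ctx = new InitialContext(env);
                // look up the LOCAL JNDI names configured on the foreign server
                QueueConnectionFactory qcf =
                    (QueueConnectionFactory) ctx.lookup("AqJmsForeignServerConnectionFactory");
                Queue queue = (Queue) ctx.lookup("queue/USERQUEUE");
                QueueConnection qc = qcf.createQueueConnection();
                try {
                    QueueSession session = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                    session.createSender(queue).send(session.createTextMessage("AQ JMS test"));
                    System.out.println("Message enqueued via " + queue.getQueueName());
                } finally {
                    qc.close();
                }
            }
        }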
    We still need to configure the JMS adapter in order to be able to access the queue from a BPEL process.

    4. Create a JMS Adapter Connection Pool in WebLogic Server

    Create the Connection Pool

    Access to the AQ JMS queue from a BPEL or other SOA process in our example is done via a JMS adapter. To enable this, the JmsAdapter in WebLogic Server needs to be configured with a connection pool which points to the local connection factory JNDI name created earlier. Navigate to Deployments, page forward (Next) if necessary, and click on JmsAdapter. Select Configuration > Outbound Connection Pools and New. Check the radio button for oracle.tip.adapter.jms.IJmsConnectionFactory and press Next.

        JNDI Name: eis/aqjms/UserQueue

    Press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory and click on eis/aqjms/UserQueue to configure it. The ConnectionFactoryLocation must point to the foreign server's local connection factory name created earlier; in our example, this is AqJmsForeignServerConnectionFactory. As a reminder, this connection factory is located under JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories, and the value needed here is the one under Local JNDI Name.

    Enter AqJmsForeignServerConnectionFactory into the Property Value field for ConnectionFactoryLocation. You must then press Return/Enter, then Save, for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated; if not, you will need to activate the changes manually in the WebLogic server console. Although the changes have been activated, the JmsAdapter still needs to be redeployed in order for the changes to become effective. This is confirmed by the message: Remember to update your deployment to reflect the new plan when you are finished with your changes.

    Redeploy the JmsAdapter

    Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated.

    The JMS adapter configuration is complete and it can now be used to access the AQ JMS queue. You can verify that the JNDI name was created correctly by navigating to Environment > Servers > soa_server1 and View JNDI Tree, then scrolling down in the JNDI Tree Structure to eis and selecting aqjms.

    This concludes the sample. In the following post, I will show you how to create a BPEL process which sends a message to this advanced queue via JMS.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

    Read the article

  • jqGrid > datatype : "local" > get and save (at once) all edited cells of a column ?

    - by Qualliarys
    Hi, I have a grid with datatype: "local". The data are an array as follows:

        var mydata = [{id:1, valeur:"a_value", designation:"a_designation"}, {id:2, ...}, ...];

    The second column (named valeur) is the only editable column of the grid (editable:true is set in colModel). In the pager of the grid, I have two buttons. One to edit all cells (at once) of the column named valeur:

        $("#mygrid").jqGrid('navButtonAdd', '#pager', {caption: "Edit values",
          onClickButton: function() {
            var ids = $('#mygrid').jqGrid('getDataIDs');
            for (var i = 0; i < ids.length; i++) {
              $('#mygrid').jqGrid('editRow', ids[i], true);
            }
          }});

    and another one to save (at once) all the changes of the edited cells:

        $("#mygrid").jqGrid('navButtonAdd', '#pager', {caption: "Save changes",
          onClickButton: function() {
            var ids = $('#mygrid').jqGrid('getDataIDs');
            for (var i = 0; i < ids.length; i++) {
              ... ??? ...
            }
          }});

    When I use:

        var rd = $("#mygrid").jqGrid('getRowData', ids[i]);
        alert("valeur=" + rd.valeur);

    each display gives me something like this:

        valeur=< input class="editable" role="textbox" name="valeur" id="1_valeur" style="width: 98%;" type="text" ...

    So, while cells are in edit mode, every rd.valeur is an input text tag; when they are not, I get the initial values of the cells. How can I get and save all the changes in this column (all cells in edit mode)?
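    One possible way to fill in the save loop (a sketch; it assumes jqGrid's inline editing module is loaded): saveRow accepts 'clientArray' as its save target, which writes the edited value back into the local grid data instead of posting to a server URL. After that, getRowData returns plain values again rather than <input> markup:

        $("#mygrid").jqGrid('navButtonAdd', '#pager', {caption: "Save changes",
          onClickButton: function() {
            var ids = $('#mygrid').jqGrid('getDataIDs');
            for (var i = 0; i < ids.length; i++) {
              // 'clientArray' saves locally instead of via an AJAX POST
              $('#mygrid').jqGrid('saveRow', ids[i], false, 'clientArray');
            }
            // now safe to read: var all = $('#mygrid').jqGrid('getRowData');
          }});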

    Read the article

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and clicks the "Cancel" button there, the following exception appears:

        ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
        Traceback (most recent call last):
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
            response = wrapped_callback(request, *callback_args, **callback_kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
            return view_func(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
            return func(request, backend, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
            redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
            *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
            return self.backend.auth_complete(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
            self.process_error(self.data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
            super(FacebookOAuth2, self).process_error(data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
            raise AuthCanceled(self, data.get('error_description', ''))
        AuthCanceled: Authentication process canceled

    Is there any way to catch it in Django?
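    One way to catch it, sketched below (the redirect target is an assumption): python-social-auth ships social.apps.django_app.middleware.SocialAuthExceptionMiddleware, which turns its exceptions into redirects, or you can write a minimal middleware yourself and add it to MIDDLEWARE_CLASSES:

        from django.shortcuts import redirect
        from social.exceptions import AuthCanceled

        class AuthCanceledMiddleware(object):
            """Redirect to the login page when the user cancels Facebook auth."""
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    return redirect('/login/')  # assumed URL; pick your own target
                return None  # let other exceptions propagate normally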

    Read the article

  • How to create static method that evaluates local static variable once?

    - by Viet
    I have a class with a static method which has a local static variable. I want that variable to be computed/evaluated once (the first time I call the function) and not evaluated again on any subsequent invocation. How can I do that? Here's my class:

        template< typename T1  = int, unsigned N1  = 1,
                  typename T2  = int, unsigned N2  = 0,
                  typename T3  = int, unsigned N3  = 0,
                  typename T4  = int, unsigned N4  = 0,
                  typename T5  = int, unsigned N5  = 0,
                  typename T6  = int, unsigned N6  = 0,
                  typename T7  = int, unsigned N7  = 0,
                  typename T8  = int, unsigned N8  = 0,
                  typename T9  = int, unsigned N9  = 0,
                  typename T10 = int, unsigned N10 = 0,
                  typename T11 = int, unsigned N11 = 0,
                  typename T12 = int, unsigned N12 = 0,
                  typename T13 = int, unsigned N13 = 0,
                  typename T14 = int, unsigned N14 = 0,
                  typename T15 = int, unsigned N15 = 0,
                  typename T16 = int, unsigned N16 = 0>
        struct GroupAlloc
        {
            static const uint32_t sizeClass;

            static uint32_t getSize()
            {
                static uint32_t totalSize = 0;
                totalSize += sizeof(T1)*N1;
                totalSize += sizeof(T2)*N2;
                totalSize += sizeof(T3)*N3;
                totalSize += sizeof(T4)*N4;
                totalSize += sizeof(T5)*N5;
                totalSize += sizeof(T6)*N6;
                totalSize += sizeof(T7)*N7;
                totalSize += sizeof(T8)*N8;
                totalSize += sizeof(T9)*N9;
                totalSize += sizeof(T10)*N10;
                totalSize += sizeof(T11)*N11;
                totalSize += sizeof(T12)*N12;
                totalSize += sizeof(T13)*N13;
                totalSize += sizeof(T14)*N14;
                totalSize += sizeof(T15)*N15;
                totalSize += sizeof(T16)*N16;
                totalSize = 8*((totalSize + 7)/8);
                return totalSize;
            }
        };
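    As written, every call re-executes the += statements (only the initialization to 0 is skipped after the first call), so totalSize keeps growing. A minimal sketch of one fix: move the accumulation into a helper (computeSize is a name introduced here) and use it as the initializer of the function-local static, which C++ guarantees is evaluated exactly once (and thread-safely under C++11):

        // additional members of GroupAlloc<...>:
        static uint32_t computeSize()
        {
            uint32_t total = 0;
            total += sizeof(T1) * N1;
            total += sizeof(T2) * N2;
            // ... same pattern for T3..T15 ...
            total += sizeof(T16) * N16;
            return 8 * ((total + 7) / 8);  // round up to a multiple of 8
        }

        static uint32_t getSize()
        {
            // initialized on the first call only; subsequent calls just return it
            static const uint32_t totalSize = computeSize();
            return totalSize;
        }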

    Read the article

  • How can I run a local Windows Application and have the output be piped into the Browser.

    - by Trey Sherrill
    I have a Windows application (the .EXE file is written in C and built with MS Visual Studio) that outputs ASCII text to stdout. I'm looking to enhance the ASCII text to include limited HTML with a few links. I'd like to invoke this application (.EXE file), take its output, and pipe it into a browser. This is not a one-time thing; each new web page would be another run of the local application. The HTML/JavaScript page below has worked for me to execute the application, but the output goes into a DOS box window instead of the browser. I'd like to update this HTML page so the browser captures that text (enhanced with HTML) and displays it.

        <body>
        <script>
        function go() {
          w = new ActiveXObject("WScript.Shell");
          w.run('C:/DL/Browser/mk_html.exe');
          return true;
        }
        </script>
        <form>
        Run My Application (Window with explorer only)
        <input type="button" value="Go" onClick="return go()">
        </form>
        </body>
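    A hedged sketch of one approach (IE only, and ActiveX must be permitted for the page): WScript.Shell's Exec(), unlike Run(), exposes the child process's stdout to script, so the page can capture the program's HTML output and inject it into the document instead of opening a DOS box:

        <body>
        <script>
        function go() {
          var sh = new ActiveXObject("WScript.Shell");
          var proc = sh.Exec('C:\\DL\\Browser\\mk_html.exe');
          // ReadAll blocks until mk_html.exe closes stdout (i.e. exits)
          var output = proc.StdOut.ReadAll();
          document.getElementById("target").innerHTML = output;
          return false; // stay on the page
        }
        </script>
        <form>
        Run My Application
        <input type="button" value="Go" onClick="return go()">
        </form>
        <div id="target"></div>
        </body>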

    Read the article

  • What do these errors mean? ISOC++ forbids assignment of arrays...

    - by xunlinkx
    I'm trying to compile some code on one of our systems for our DBA. I've edited the makefiles to include the pertinent libraries listed in the documentation, but I keep getting these errors. Can you discern any obvious problems from my command lines in reference to the errors listed? Thank you!

        make -f /u01/app/banner/ban8/TEST3/links/Makefile_tm_linux64_redhat5_ban8.mk
        gcc -m64 -D_NOFIXARGPTR -fpic -shared -DTMCILIB_EXPORTS -D_TMUNICODE \
            -I/usr/local/ban_icu -I/usr/local/src/icu/source/i18n/ -I/usr/local/src/icu/source/common/ \
            -I/usr/local/src/icu/source/extra/ustdio/ -I/usr/local/src/icu/source/io \
            -L/usr/lib64 -L/usr/lib -L/usr/local/src/icu/source/data/ -L/usr/local/src/icu/source/data/out/ \
            -L/usr/local/src/icu/source/tools/toolutil/ -L/usr/lib/im/icuconv/ -L/usr/local/lib/ -L. \
            -licui18n -licudata -licuuc -licu-toolutil -licuio \
            msgfmttm.cpp umsgtm.cpp tmcilib.cpp -o /u01/app/banner/ban8/TEST3/general/exe/libtmciuc.so

        umsgtm.cpp: In function ‘void fixArgPtr(const UChar*, __va_list_tag (*)[1])’:
        umsgtm.cpp:158: error: array must be initialized with a brace-enclosed initializer
        umsgtm.cpp:194: error: ISO C++ forbids assignment of arrays
        umsgtm.cpp: In function ‘int32_t tmumsg_vformat(void*, UChar*, int32_t, __va_list_tag*, UErrorCode*)’:
        umsgtm.cpp:305: error: cannot convert ‘__va_list_tag**’ to ‘__va_list_tag (*)[1]’ for argument ‘2’ to ‘void fixArgPtr(const UChar*, __va_list_tag (*)[1])’
        tmcilib.cpp: In function ‘int tmprintf(TMBundle*, const UChar*, ...)’:
        tmcilib.cpp:743: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function ‘int tmfprintf(TMBundle*, UFILE*, const UChar*, ...)’:
        tmcilib.cpp:757: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function ‘int tmsprintf(TMBundle*, UChar*, const UChar*, ...)’:
        tmcilib.cpp:808: error: array must be initialized with a brace-enclosed initializer
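    For the "forbids assignment of arrays" errors, the usual culprit on x86-64 Linux is that va_list is defined as a one-element array type (__va_list_tag[1]), so it can be neither assigned nor initialized by copy. A hedged sketch of the portable pattern (use_args stands in for the real callee; va_copy is C99 but g++ accepts it in practice):

        #include <cstdarg>

        static void use_args(va_list args);  // placeholder for the real consumer

        void forward_args(const char* fmt, ...)
        {
            va_list ap, ap2;
            va_start(ap, fmt);
            va_copy(ap2, ap);   // instead of: ap2 = ap;  (ill-formed for array va_list)
            use_args(ap2);
            va_end(ap2);
            va_end(ap);
        }

    This suggests the code in umsgtm.cpp/tmcilib.cpp was written for a platform where va_list is a plain pointer; replacing the direct assignments with va_copy, and passing va_list values rather than addresses of them, should address all six diagnostics.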

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >