Search Results

Search found 1159 results on 47 pages for 'libs'.

Page 24 of 47

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that.

    First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and search engine friendliness, but I don't believe these are an issue for this particular app.)

    Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it would be easy to have my web framework generate different development versus release HTML that includes multiple verbose versus concatenated, minified files. But given that I'm only using static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.)

    In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?

    Read the article

  • Suggestions for a CMS markup language for PHP

    - by Yanick Rochon
    As a learning experience, and as a project, I am attempting to write a CMS module for ZF2. One of the features I would like to have is the possibility of adding dynamic content to pages by calling PHP functions in the view scripts. However, I do not want to give users the freedom to write PHP code directly inside the page content, but rather implement custom view helpers (or widgets) to handle the logic. For example: calling partial, partialLoop, url, etc., specifying arguments and all.

    I liked the idea of extending Markdown, but this would get complicated when trying to add custom CSS classes to elements, etc. Then I had the idea of simply doing a preg_replace on some patterns. For example, the string

        ### partialLoop:['partials/display.phtml',[{id:'p1',price:4.99},{id:'p2',price:12.34}]] ###

    would be replaced by

        <?php echo $this->partialLoop('partials/display.phtml', array(array('id'=>'p1','price'=>4.99),array('id'=>'p2','price'=>12.34))) ?>

    Obviously, there would be some caching done so the page content is not rendered every time.

    Does this sound good? If not, what would be a good way of doing this? Or is there a project already being developed for doing this? (I'd like to avoid heavy third-party libs, and something fairly or fully compatible with ZF2 would be nice.) Thanks.

    Read the article

  • Compiling NETGEAR WNR2000v3 firmware under 12.04.1 LTS

    - by Madmanguruman
    I'm trying to compile the GPL NETGEAR firmware for the WNR2000v3 router using Ubuntu 12.04.1 LTS. I've downloaded and extracted the source and installed everything I think I need, yet there are two build dependencies that I cannot resolve:

        > make menuconfig
        ....
        Build dependency: Please install ncurses. (Missing libncurses.so or ncurses.h)
        Build dependency: Please install zlib. (Missing libz.so or zlib.h)

    I've tried apt-getting all of the following packages:

        libncurses5 libncurses5:i386 libncurses5-dev zlib1g zlib1g-dev zlib1g-dbg

    The missing libs seem 'real': the target .so files aren't in /usr/lib, although the .h files are in /usr/include.

    EDIT: some Googling told me of broken or missing symlinks. I found the following:

        /lib/i386-linux-gnu/libncurses.so.5 -> libncurses.so.5.9
        /lib/i386-linux-gnu/libncurses.so.5.9
        /lib/i386-linux-gnu/libncursesw.so.5 -> libncursesw.so.5.9
        /lib/i386-linux-gnu/libncursesw.so.5.9
        /lib/i386-linux-gnu/libz.so.1 -> libz.so.1.2.3.4
        /lib/i386-linux-gnu/libz.so.1.2.3.4

    so I tried to symlink these into /usr/lib:

        /usr/lib/libncurses.so -> /lib/i386-linux-gnu/libncurses.so.5
        /usr/lib/libz.so -> /lib/i386-linux-gnu/libz.so.1

    but the project still complains about missing libncurses and zlib.

    EDIT: I was able to get the dependencies to work under 8.04. I need to cross-reference things now.

    Does anyone have some tips on how to debug this sort of issue?

    Read the article

  • Install proprietary drivers 14.04 NVIDIA (steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, I started looking into installing Steam and I'm getting segmentation errors when I try to run the software. I tried installing 32-bit libs and it seemed like they weren't available or were already installed. Upon further investigation, I found that a suggested solution is to install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (For reference, I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, and the installed Ubuntu version is 64-bit as well.)

    Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store:

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run "steam --reset":

        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!

    Read the article

  • How to compile FFmpeg with x265 support?

    - by Levan
    Today I found out that x265 support is already present in FFmpeg, so I compiled FFmpeg with this guide. Sadly, libx265 did not work on Ubuntu; however, on Windows I tried the same thing with the Zeranoe FFmpeg build and it worked without a problem. So do you think I did something wrong, or is it not yet implemented in the Linux build (using that guide)?

    The results of the command ffmpeg -codecs | grep -i hevc show:

        ffmpeg version 2.1.git Copyright (c) 2000-2014 the FFmpeg developers
        built on Feb 19 2014 19:00:17 with gcc 4.8 (Ubuntu/Linaro 4.8.1-10ubuntu9)
        configuration: --prefix=/home/levan/ffmpeg_build --extra-cflags=-I/home/levan/ffmpeg_build/include --extra-ldflags=-L/home/levan/ffmpeg_build/lib --bindir=/home/levan/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
        libavutil      52. 64.100 / 52. 64.100
        libavcodec     55. 52.102 / 55. 52.102
        libavformat    55. 33.100 / 55. 33.100
        libavdevice    55. 10.100 / 55. 10.100
        libavfilter     4.  1.102 /  4.  1.102
        libswscale      2.  5.101 /  2.  5.101
        libswresample   0. 17.104 /  0. 17.104
        libpostproc    52.  3.100 / 52.  3.100
        D.V.L. hevc    H.265 / HEVC (High Efficiency Video Coding)

    Thank you for your time.

    Read the article

  • Correct installation and configuration of OpenJDK and R

    - by Marco K
    I am relatively new to Ubuntu, so I won't know a lot of commands that have probably become standard to many of you. I am trying to set up R and, with it, the necessary Java dependencies to install e.g. JGR, rJava, etc. I read through quite a few sets of instructions for doing that, but somehow I must have done something wrong. Here is the state of R and Java:

        R --version
        R version 2.14.1 (2011-12-22)
        Copyright (C) 2011 The R Foundation for Statistical Computing
        ISBN 3-900051-07-0
        Platform: x86_64-pc-linux-gnu (64-bit)

        java -version
        java version "1.6.0_23"
        OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1)
        OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

        R CMD javareconf
        Java interpreter : /usr/bin/java
        Java version     : 1.6.0_23
        Java home path   : /usr/lib/jvm/java-6-openjdk/jre
        Java compiler    : /usr/bin/javac
        Java headers gen.: /usr/bin/javah
        Java archive tool: /usr/bin/jar
        Java library path: /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
        JNI linker flags : -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm
        JNI cpp flags    :

    But when I try to install 'JavaGD' in R, which is a dependency for JGR, I get:

        ...
        checking Java support in R... present:
        interpreter : '/usr/bin/java'
        cpp flags   : ''
        java libs   : '-L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm'
        configure: error: One or more Java configuration variables are not set.
        Make sure R is configured with full Java support (including JDK). Run
        R CMD javareconf
        as root to add Java support to R.
        ...

    Any help would be greatly appreciated. Thanks!

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and I noticed that a lot of them had to do with gesture support, and that many of these were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support these capabilities, and can I expect others to add support in the near future?

    Here are the packages that were installed:

        Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

        Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

        libgeis1: Gesture engine interface support
          A common API for clients of a systemwide gesture recognition and propagation engine.
        libframe6: Touch Frame Library
          This library handles the buildup and synchronization of a set of simultaneous touches. The library is input agnostic, with bindings for mtdev, frame and XI2.1.
        libgrail5: Gesture Recognition And Instantiation Library
          This library consists of an interface and tools for handling gesture recognition and gesture instantiation. Applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device.

    And the descriptions for the upgraded packages are:

        libgrip0: provides multitouch gestures to GTK+ apps
          Libgrip hooks gesture recognition into GTK+ applications.
        ginn: Gesture Injector: No-GEIS, No-Toolkits
          A daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs.

    Adding a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Does anyone have any info about this?

    Read the article

  • Differentiating between Hard and Soft Dependencies - Fedora Yum [closed]

    - by Sujit
    I will ask this with an example: I have installed gnash-plugin on 64-bit Fedora with yum. It pulled in the following packages:

        Installing : agg-2.5-9.fc13.x86_64                         1/6
        Installing : gtkglext-libs-1.2.0-10.fc12.x86_64            2/6
        Installing : boost-thread-1.44.0-7.fc14.x86_64             3/6
        Installing : boost-date-time-1.44.0-7.fc14.x86_64          4/6
        Installing : 1:gnash-0.8.8-4.fc14.x86_64                   5/6
        Installing : 1:gnash-plugin-0.8.8-4.fc14.x86_64            6/6

    Now, I tested the plugin and I didn't like it. I want to remove all of the above packages that were installed along with the plugin, since I no longer need them. How can I do this? I checked the remove-with plugin for yum, but it pulls in all the packages that currently depend on those packages.

    I understand the thought process behind showing which packages are getting affected, but I am wondering if there is any way of looking at the history of which packages got installed when I installed a certain package. Before gnash-plugin was there, Firefox was running fine, but after the installation Firefox now depends on this new plugin.

    Has anyone worked on differentiating hard dependencies (hard meaning the program will break if that package is not there) and soft dependencies (soft meaning the program may not be fatally affected)?

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html

    The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    Where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map.

    It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class. Ultimately, when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded, the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading, and they both continue successfully.

    Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!)

    There are a number of obvious things we can try to solve this problem, and they basically take three forms:

    1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
    2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
    3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically).

    There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we can not implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded. Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive).

    Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders?

    So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading.

    Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

    Proposal: Fully Concurrent ClassLoaders

    The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method.

    All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders.

    Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

    Preliminary webrevs:
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

    Please direct all comments to the mailing list [email protected].

    Read the article

  • RESTful Java-based web services in JSON + HTML5 and JavaScript, no templates (JSP/JSF/FreeMarker), aka fat/thick client

    - by Ismail Marmoush
    I have this idea of building a website which serves JSON data through a RESTful services framework, and which will not use any template engines like JSP/JSF/FreeMarker - just pure HTML5 and JavaScript libs. What do you think of the pros and cons of such a design?

    Just for elaboration and brainstorming, a friend of mine argued with the following concerns:

    - sounds like GWT
    - this way you won't have any control over your service API; for example, say you want to charge the user per request - how will you handle it?
    - how will you control your design and themes?
    - what about the first request the browser makes? not easy with this
    - all of the user's requests will come with an "Accept" header of "application/json"; how will you separate a browser from an abuser?
    - this way all of your public APIs will be used abusively by third-party apps, and you won't be able to lock them down, since you won't be able to block the normal user's browser
    - we won't use compiled HTML anyway, but maybe something like FreeMarker; in that case you won't expose any of your JSON resources to the unauthorized user, but you will expose all the HTML, since any browser can access them all - the well-known first-class services do this
    - can you send me links to what you've read?
    - keep in mind DOM-based XSS; it will be a nightmare, of course, if what you say is applicable

    Read the article

  • Arguments for a coding standard?

    - by acidzombie24
    A few friends and I are planning to work on a project together and we want a COMPLETELY DIFFERENT coding standard. We do NOT want to use the coding standard the libraries/language uses. It's our project and we want to mess around. So I came here to ask what you think are good standards and arguments for them (or what not to do and arguments against it).

    The styles I remember most are:

    - upper-casing the entire word
    - camel and Pascal casing
    - using '_' to separate each word
    - pre- or postfixing letters or words (I hate m for member, but I think IsCond() is a good function name; SomethingException as a postfix example)
    - using '_' at the start or end of words
    - brace placement: on a new line or the same line?

    I know of libs that use Pascal casing on all public and protected members. But would you ever get confused about whether something is a function, a variable, or even a property if the language supports it? What about if you decide a public member should be private (or vice versa) - wouldn't that create a lot of fix-up work or inconsistencies? Is prefixing C to every class a good idea? I ask: what do you think, and why?

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end.

    Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

    - performance
    - ease of use (allow streaming or formatting or something like that)
    - reliability (doesn't leak or crash!)
    - cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)

    which logging library would you recommend?

    Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use.

    Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.
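    None of the libraries mentioned above is shown here, but as a rough baseline for what "handles multithreading" means in practice, here is a minimal sketch (C++17; the class and file names are made up for illustration) of a mutex-guarded, stream-style logger:

        #include <fstream>
        #include <mutex>
        #include <sstream>
        #include <string>

        // Minimal thread-safe logger: each log() call formats into a local
        // buffer first, then appends to the file under a mutex so concurrent
        // threads cannot interleave partial lines.
        class Logger {
        public:
            explicit Logger(const std::string& path) : out_(path, std::ios::app) {}

            template <typename... Args>
            void log(const Args&... args) {
                std::ostringstream line;          // per-call buffer, no locking needed
                (line << ... << args);            // C++17 fold expression
                line << '\n';
                std::lock_guard<std::mutex> guard(mutex_);
                out_ << line.str();               // single locked write per message
            }

        private:
            std::ofstream out_;
            std::mutex mutex_;
        };

        // Usage: Logger log("game.log"); log.log("player ", 42, " joined");

    A real library adds asynchronous sinks, severity filtering and richer formatting on top of this, which is where the performance differences between the candidates above come from.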

    Read the article

  • Update Manager unable to update

    - by muguro
    Requires installation of untrusted packages: "The action would require the installation of packages from not authenticated sources."

    I get this error every time I try updating. The system shows that it has 466 updates but fails after clicking update. "More details" shows this:

        accountsservice apparmor apport apport-gtk apt apt-transport-https apt-utils aptdaemon
        aptdaemon-data at-spi2-core bamfdaemon base-files bcmwl-kernel-source bind9-host compiz
        compiz-core compiz-gnome compiz-plugins-default cron cups cups-bsd cups-client cups-common
        cups-filters cups-ppdc dbus dbus-x11 dconf-gsettings-backend dconf-service desktop-file-utils
        dmsetup dnsutils empathy empathy-common eog evince evince-common evolution-data-server
        evolution-data-server-common firefox firefox-globalmenu firefox-gnome-support firefox-locale-en
        fontconfig fontconfig-config fonts-liberation fonts-opensymbol foomatic-filters gcalctool gdb
        ghostscript ghostscript-cups ghostscript-x ginn gir1.2-atspi-2.0 gir1.2-dbusmenu-glib-0.4
        gir1.2-dbusmenu-gtk-0.4 gir1.2-gst-plugins-base-0.10 gir1.2-gtk-3.0 gir1.2-gtksource-3.0
        gir1.2-gudev-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-launchpad-integration-3.0 gir1.2-pango-1.0
        gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gir1.2-unity-5.0 gir1.2-webkit-3.0
        glib-networking glib-networking-common glib-networking-services gnome-accessibility-themes
        gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-games-data
        gnome-icon-theme gnome-media gnome-orca gnome-settings-daemon gnome-sudoku gnomine gnupg
        google-talkplugin gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-alsa
        gstreamer0.10-plugins-base gstreamer0.10-plugins-base-apps gstreamer0.10-x gvfs gvfs-backends
        gvfs-bin gvfs-common gvfs-daemons gvfs-fuse gvfs-libs gwibber gwibber-service
        gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hdparm hplip
        hplip-data indicator-sound initscripts isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk
        krb5-locales landscape-client-ui-install language-pack-en language-pack-en-base
        language-pack-gnome-en language-pack-gnome-en-base launchpad-integration libaccountsservice0
        libapt-inst1.4 libapt-pkg4.12 libart-2.0-2 libasound2 libatspi2.0-0 libbamf0 libbamf3-0
        libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcairo-gobject2 libcairo2

    Read the article

  • Why can't I create Direct3D objects?

    - by quakkels
    I've been programming professionally for years using languages like VBScript, JavaScript, and C#. As a hobby, I'm getting into some C/C++ and games programming with DirectX. I am running into an issue where I cannot create Direct3D objects. I am using Visual C++ 2010 Express. After I installed VC++ 2010 Express, I then installed the June 2010 release of DirectX. I am trying to include DirectX via #pragma statements. This is the code I have so far in my winmain.cpp source file:

        #include <Windows.h>
        #include <d3d11.h>
        #include <time.h>
        #include <iostream>
        using namespace std;

        #pragma comment(lib, "d3d11.lib")
        #pragma comment(lib, "d3dx11.lib")

        // program settings
        const string AppTitle = "Direct3D in a Window";
        const int ScreenWidth = 1024;
        const int ScreenHeight = 768;

        // direct3d objects
        LPDIRECT3D11 d3d = NULL; // this line is showing an error

    The type LPDIRECT3D11 is showing an error:

        Error: Identifier "LPDIRECT3D11" is undefined

    Am I missing something here to get VC++ 2010 Express to recognize and load the DirectX libs? Thanks for any help.
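    Note that d3d11.h does not declare an LPDIRECT3D11 type; that naming convention belongs to Direct3D 9 (LPDIRECT3D9 created via Direct3DCreate9). In Direct3D 11 the entry point is D3D11CreateDevice, which fills in ID3D11Device and ID3D11DeviceContext pointers. A minimal sketch (illustrative only, not the poster's project):

        #include <windows.h>
        #include <d3d11.h>
        #pragma comment(lib, "d3d11.lib")

        // Minimal sketch: Direct3D 11 has no LPDIRECT3D11 "root object" the way
        // Direct3D 9 had LPDIRECT3D9; the device is created directly instead.
        int main() {
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D_FEATURE_LEVEL featureLevel;

            HRESULT hr = D3D11CreateDevice(
                nullptr,                    // default adapter
                D3D_DRIVER_TYPE_HARDWARE,   // hardware device
                nullptr,                    // no software rasterizer module
                0,                          // creation flags
                nullptr, 0,                 // default feature levels
                D3D11_SDK_VERSION,
                &device, &featureLevel, &context);

            if (SUCCEEDED(hr)) {
                context->Release();
                device->Release();
            }
            return 0;
        }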

    Read the article

  • Is there a shell-independent HUD-like menu search tool for Xfce/GNOME/Cinnamon?

    - by Redsandro
    The Ubuntu Heads-Up Display (HUD) - you love it or you hate it. Personally, I rather like a classic desktop, so I use Xfce or the GNOME fork Cinnamon, and I'd like to keep those menus where they are. But the HUD is pretty awesome when your menus are complex and you've forgotten where an option sits. This makes that search trick very interesting.

    I know the HUD is Unity-specific. I am looking for a HUD-like tool to complement the menu in shells other than Unity. There is Appmenu Runner for KDE that does this. There is also appmenu-qt for KDE. The problem with the above is that they use KDE libs and only work for KDE apps. This is Linux; there ought to be something like this for GNOME/GTK apps, right?

    I'm looking for any tool that can search the menus. I already use(d) Synapse, Kupfer and GNOME Do, but those are simply app launchers (with some tricks). Something like that would suffice if only they included searching the menus of the currently focused application.

        The HUD allows users to activate menu items by typing part of the name. It uses a fuzzy search algorithm that will highlight partial matches. It can match menu items that are multiple layers deep in an application's menu hierarchy. The feature, which replaces traditional menu accelerators, is activated by pressing the alt key.

    Similar questions:

    - Is there a way to search a menu bar in Debian? - Unix.StackExchange
    - How can I access menu bar items alike hud (unity)? - Unix.StackExchange
    - HUD in other window managers (especially xmonad) - AskUbuntu

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any third-party libs.

    I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random amount of single asteroids/comets which the player can interact with, and also a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are:

    - Simply create bitmaps with space scenery backgrounds that can be tiled seamlessly, construct my 2D universe out of these tiles, and then place interactive objects (planets, other spaceships) on it.
    - Use vector graphics. I would have a solid-color background, some random background objects and gradients here and there. My problem here: lack of knowledge of how well vector graphics is integrated in Android.

    Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.
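    The tiled-bitmap idea in the first option is language-independent; as a rough illustration (sketched in C++ rather than Android Java, with made-up constants), wrap-around tile indexing is essentially all that is needed for a small set of seamless tiles to cover a pseudo-infinite universe:

        #include <cmath>
        #include <cstdio>

        // Illustrative tile lookup for a "pseudo-infinite" tiled background:
        // world coordinates wrap around, so the same small set of seamless
        // tiles repeats forever in every direction.
        const int   TILES_X   = 8;       // tiles in the repeating pattern
        const int   TILES_Y   = 8;
        const float TILE_SIZE = 512.0f;  // world units per tile

        // Map a world position to a tile index in the repeating pattern.
        void tileAt(float worldX, float worldY, int& tx, int& ty) {
            int ix = (int)std::floor(worldX / TILE_SIZE);
            int iy = (int)std::floor(worldY / TILE_SIZE);
            // Positive modulo gives wrap-around for negative coordinates too.
            tx = ((ix % TILES_X) + TILES_X) % TILES_X;
            ty = ((iy % TILES_Y) + TILES_Y) % TILES_Y;
        }

        int main() {
            int tx, ty;
            tileAt(-100.0f, 9000.0f, tx, ty);  // camera position anywhere in the universe
            std::printf("draw tile (%d, %d)\n", tx, ty);
            return 0;
        }

    Only the handful of tiles visible around the camera ever needs to be drawn or held in memory, regardless of how large the universe appears.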

    Read the article

  • Installing Skype on 12.04 64 bit causes errors

    - by Wolfy87
    Hi there. I am trying to install Skype through apt-get but I am having some trouble. The skype package depends on skype-bin, which is not found in my list of packages. So when trying to install Skype I get the following error:

        $ sudo apt-get install skype
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         skype : Depends: skype-bin but it is not installable
        E: Unable to correct problems, you have held broken packages.

    Does anyone know why this might happen? Am I missing a repository? I get similar results when downloading the .deb from their site, but it complains about ia32-libs not being installable, because that in turn depends on another package that does not exist in my list.

    Please bear in mind that this is a custom install from the company I work for. They have secured it, and I think they have updated it over time and skipped versions, possibly breaking things.

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I'll not go into the distinction between the different kinds of test there are; there are already a few questions on these sites regarding that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application", from which this question actually derives.

    The isolation problem: what is the smallest isolatable unit of a program? Well, as I see it, it (highly?) depends on what language you are coding in. Michael Feathers talks about the concept of a seam: [WEwLC, p31] "A seam is a place where you can alter behavior in your program without editing in that place." And without going into the details, I understand a seam - in the context of unit testing - to be a place in a program where your "test" can interface with your "unit".

    Examples: unit tests - especially in C++ - require the code under test to add more seams than would be strictly called for by the problem itself. For example:

    - adding a virtual interface where a non-virtual implementation would have been sufficient (illustrated in the sketch below)
    - splitting - generalizing(?) - a (smallish) class further "just" to facilitate adding a test
    - splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests

    The question: I'll try a few versions that hopefully ask about the same point:

    - Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure?
    - Is the generalization code needs to exhibit to be unit-testable useful for anything but the unit tests?
    - Does adding unit tests force one to generalize unnecessarily?
    - Is the shape unit tests force on code "always" also a good shape for the code in general, as seen from the problem domain?

    I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code - namely the unit test. So is this reason enough to generalize?
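    To make the first example concrete, here is a small illustrative C++ sketch (the class names are invented for this example, they are not from the original post) of a virtual interface introduced purely as a seam for testing:

        #include <iostream>
        #include <string>
        #include <vector>

        // The production code only ever needed one concrete implementation, but a
        // virtual interface (a seam) is introduced so a test can substitute a fake.
        class ITelemetry {
        public:
            virtual ~ITelemetry() = default;
            virtual void record(const std::string& event) = 0;
        };

        class ConsoleTelemetry : public ITelemetry {      // production implementation
        public:
            void record(const std::string& event) override { std::cout << event << '\n'; }
        };

        class FakeTelemetry : public ITelemetry {          // test double enabled by the seam
        public:
            void record(const std::string& event) override { events.push_back(event); }
            std::vector<std::string> events;
        };

        class Engine {                                     // unit under test
        public:
            explicit Engine(ITelemetry& telemetry) : telemetry_(telemetry) {}
            void update() { telemetry_.record("tick"); }
        private:
            ITelemetry& telemetry_;
        };

        int main() {                                       // stand-in for a test case
            FakeTelemetry fake;
            Engine engine(fake);
            engine.update();
            std::cout << (fake.events.size() == 1 ? "ok" : "fail") << '\n';
            return 0;
        }

    Whether the extra interface is a generalization the application itself benefits from, or pure test scaffolding, is exactly what the question is asking.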

    Read the article

  • How to install Gyachi on Ubuntu 12.10

    - by Oguz Can Sertel
    I would like to use Gyachi on Ubuntu 12.10. I tried these steps but it doesn't work. I wanted to compile it myself, but it needs some libs, which made me confused, so I gave up:

        sudo add-apt-repository ppa:adilson/experimental
        sudo apt-get update
        sudo apt-get install gyachi

    Thank you for your help. At the first command the output is:

        sudo add-apt-repository ppa:adilson/experimental
        You are about to add the following PPA to your system:
        Contains packages that are not in the official Debian/Ubuntu repositories and newer versions and snapshots which are not available yet in the repositories. Theses packages are experimental. Use them at your own risk.
        More info: https://launchpad.net/~adilson/+archive/experimental
        Press [ENTER] to continue or ctrl-c to cancel adding it
        gpg: keyring `/tmp/tmp3y3i7p/secring.gpg' created
        gpg: keyring `/tmp/tmp3y3i7p/pubring.gpg' created
        gpg: requesting key 27B81625 from hkp server keyserver.ubuntu.com
        gpg: /tmp/tmp3y3i7p/trustdb.gpg: trustdb created
        gpg: key 27B81625: public key "Launchpad Experimental Packages PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)
        OK

    And after sudo apt-get update, this is the output of sudo apt-get install gyachi:

        sudo apt-get install gyachi
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package gyachi

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short run down on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris.

    What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached".

    Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux.

    Preparing your environment: You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode.

    Download the sources. While I'm putting all this together, I like to keep everything neatly tucked away in a build directory:

        mkdir ~/build
        cd ~/build
        curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz
        curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz

    Unpack the sources:

        tar xzf tmux-1.5.tar.gz
        tar xzf libevent-2.0.16-stable.tar.gz

    Compiling libevent:

        cd libevent-2.0.16-stable
        ./configure --prefix=/opt
        make
        sudo make install

    Compiling tmux:

        cd ../tmux-1.5
        LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt
        make
        sudo make install

    That's all there is to it!

    Read the article

  • Configuration Tips for better Performance with ADF Mobile Apps

    - by SRINI INDLA
    Some tips to keep in mind to make sure an ADF Mobile application's performance is optimal:

    1. Select release mode in the deployment profile. This is perhaps the most important thing to remember to ensure the best performance for ADF Mobile apps. Selecting this option causes the deployer to package an optimized JVM and minified JS libs with the mobile app, thereby significantly improving the overall performance of the application.

    2. For iOS you do not need to do anything else other than selecting release mode in the deploy profile. However, on Android you have to create a keystore and configure it in JDev --> Tools --> Preferences --> ADF Mobile --> Platforms : Android, as shown in the snapshot below.

    3. Steps for generating the keystore for Android using keytool.

    4. Logging level setting in logging.properties: make sure the log level is set to SEVERE for both the framework logger and the application logger, as follows:

        oracle.adfmf.framework.level=SEVERE
        oracle.adfmf.application.level=SEVERE

    5. When using SOAP web services with the WebService Data Control, make sure you select the option to copy the WSDL. This will cause JDev to download the WSDL and all the XSDs referenced by the WSDL from the server at design time and package them with the application during deployment. This way the application does not incur the cost of downloading these resources at run time from the device.

    Read the article

  • How to deal with the need to know multiple programming languages? When to stop learning new languages?

    - by Raphael
    I am a relatively young programmer. I am 23 and I have been programming professionally for about 5 years. Like most programmers, I started with C, learned some x86 assembly for fun, and then I found C++, which turned out to be my greatest passion in the programming world. Programming with C and C++ forces you to learn platform-specific APIs, libs and frameworks, each of which requires constant study and experimentation.

    After some time I had to move on to Java and C#, as the demand in my region is basically for these languages. With these languages I entered the world of web development, and then I had to learn JavaScript. Developing for the .NET Framework was exciting at first, but I constantly felt as if I was getting tied up by Microsoft (and of course the .NET Framework was driving me away from Linux). For desktop development I could do pretty much everything I did with .NET using C++ with Qt, but for web development I had to look for an alternative. Quickly I found Django, and then I proceeded to learn Python so I could use Django. Nowadays I am learning iOS development with Objective-C.

    So far it has been pretty easy to learn all these languages (C++ trained me well), but I am worried that someday I won't be able to keep track of them all. Just to clarify: the only languages I learned because I had to were C# and Java. All of the others I learned for fun, because I love programming and learning new things. Also, I like to keep my skills sharp in desktop, web and mobile development.

    My question is: how do you keep track of multiple programming languages? (I mean, keep track of changes to these languages and keep your skills sharp.) And: is there such a thing as enough programming languages?

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc., and now I'm ready to ask some questions (if you don't mind answering them), because I'm really confused.

    First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, create some logic by being able to link published events to published actions (e.g. a level "activate" event triggers a door "open" action), etc. Because of all this, I guess my entity system should be written in scripting, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here my questions start:

    1. Should I define my components in C++? If I do this in C++, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua?

    2. Where should I place the message system for my game engine? Lua? C++? I'm tempted to just have C++ objects behave as servers, offering services to the Lua business logic - things like the physics system, rendering system, input system, World class, etc. - and leave all the other things to Lua: creation/composition of entities based on components, game logic, etc.

    Could anyone give any insight on how to accomplish this, and which approach is better? Thanks in advance, HexDump.
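    To illustrate the "C++ objects as servers" idea in question 2, here is a minimal sketch using the plain Lua C API (the service and function names are made up for the example, and a real engine would register many such services):

        #include <lua.hpp>
        #include <cstdio>

        // A C++ "service" exposed to Lua business logic. The C++ side owns the
        // real systems (physics, rendering, ...); Lua only calls the exposed API.
        static int physics_apply_impulse(lua_State* L) {
            // arguments pushed from Lua: entity id, x, y
            int id   = (int)luaL_checkinteger(L, 1);
            double x = luaL_checknumber(L, 2);
            double y = luaL_checknumber(L, 3);
            std::printf("impulse on entity %d: (%f, %f)\n", id, x, y);
            return 0;  // number of values returned to Lua
        }

        int main() {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);
            lua_register(L, "apply_impulse", physics_apply_impulse);
            // Lua-side game logic (entity composition, behaviours) calls the service:
            luaL_dostring(L, "apply_impulse(42, 1.5, 0.0)");
            lua_close(L);
            return 0;
        }

    A binding generator or wrapper library would normally replace the hand-written glue, but the shape stays the same: C++ owns the heavy systems, while Lua composes entities from components and drives the game logic through the exposed services.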

    Read the article

  • Is there a shell-independent HUD-like menu search tool for Gnome?

    - by Redsandro
    The Ubuntu HUD - you love it or you hate it. Personally, I rather like a classic desktop, so I use Xfce and Cinnamon, and I don't want to lose my menu in applications. But the HUD is pretty awesome when your menus are complex and you've forgotten where an option sits. This makes that search trick very interesting.

    I know the HUD is Unity-specific. I am looking for a HUD-like tool to complement the menu in shells other than Unity. There is Appmenu Runner for KDE that does this. There is also appmenu-qt for KDE. The problem with the above is that they use KDE libs and only work for KDE apps. This is Linux; there ought to be something like this for GNOME/GTK apps, right?

    I'm looking for any tool that can search the menus. I already use(d) Kupfer and GNOME Do; something like that would suffice if only it included searching the menus of the currently focused application.

    Read the article

  • I can't run NetBeans even though I installed it successfully

    - by David
    I'm new to Ubuntu as well as NetBeans. I installed NetBeans and made sure to install all the JDKs and JREs I could find. It installed without errors. I also saw this question and made sure I followed all the instructions there as well. I never got any error messages of any kind; as far as I know, it installed okay. However, when I try to run NetBeans, I get this message at the bottom of the NetBeans IDE:

        ant -f /root/NetBeansProjects/samp1 -Djsp.includes=/root/NetBeansProjects/samp1/build/web/one.jsp -DforceRedeploy=false -Dclient.urlPart=/one.jsp -Ddirectory.deployment.supported=true -Djavac.jsp.includes=org/apache/jsp/one_jsp.java -Dnb.wait.for.caches=true run
        /root/NetBeansProjects/samp1/nbproject/build-impl.xml:774: The libs.CopyLibs.classpath property is not set up.
        This property must point to org-netbeans-modules-java-j2seproject-copylibstask.jar file which is part
        of NetBeans IDE installation and is usually located at <netbeans_installation>/java<version>/ant/extra folder.
        Either open the project in the IDE and make sure CopyLibs library exists or setup the property manually.
        For example like this:
          ant -Dlibs.CopyLibs.classpath=a/path/to/org-netbeans-modules-java-j2seproject-copylibstask.jar
        BUILD FAILED (total time: 0 seconds)

    Read the article
