Search Results

Search found 3984 results on 160 pages for 'modules'.

Page 47/160 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium-sized programming projects and have no experience working in a team environment. Now there will be three of us in an in-house software development team, tasked with developing a number of applications for an academic institution. We have decided to target the web for the majority of the projects and are planning to use Ruby on Rails, so I would like to ask for your input, advice, and approaches regarding software development as a team using the RoR web framework.

    One thing that has really confounded me is how you divide the programming tasks of a project when there are three of you doing the coding. It's obvious that we as developers approach a problem in a modular way and finish modules one after another. If the project consists of three modules, should each one of us focus on one of those modules? Would it be faster that way? How about if the three of us focused on one module first (that's what I really prefer)? Is using a distributed version control system such as Git the answer to this type of problem (see the sketch below)? Please don't forget to share your tips and experiences with team software development. Cheers!
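
    One way Git typically supports this kind of split is a feature-branch workflow, whether the branches map to modules or to smaller tasks within one module; a minimal sketch (the repository URL and branch names are illustrative):

        # each developer works on a topic branch, e.g. for one module
        git clone git@example.com:team/app.git
        git checkout -b module-billing

        # commit locally, then publish the branch for the others to review
        git add .
        git commit -m "Implement billing module skeleton"
        git push -u origin module-billing

        # integrate when the work is ready
        git checkout master
        git merge module-billing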

    Read the article

  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the prepatched source from Launchpad using bzr, compiled it with the usual make-kpkg using the current config file plus default options for the rest, installed the kernel, and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So I tried recompiling these and succeeded with both of them (no errors were reported): the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch.

    Yet when I try to modprobe either module (both having in common the fact that they were built after the kernel: fglrx as a deb, and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware," I can't remember), and neither fglrx nor wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder.

    How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing these? I know that a module-signing option was merged recently, but as far as I could tell the kernel config file generated by the build process had that disabled. OTOH, while building the wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.
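
    A standard first check when modprobe cannot find a freshly built module is to confirm the module really sits under the running kernel's version directory and to rebuild the dependency index that modprobe consults; a hedged diagnostic sketch (the module name wl is taken from the post above):

        uname -r                          # confirm which kernel is actually running
        ls /lib/modules/$(uname -r)/      # the modules must live under this exact version
        sudo depmod -a                    # rebuild modules.dep so modprobe can locate them
        sudo modprobe -v wl               # retry verbosely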

    Read the article

  • Hijacking ASP.NET Sessions

    - by Ricardo Peres
    So, you want to be able to access other users' session state from the session id, right? Well, I don't know if you should, but you definitely can do that! Here is an extension method for that purpose. It uses a bit of reflection, which means it may not work with future versions of .NET (I tested it with .NET 4.0/4.5).

        // Required namespaces: System, System.Reflection, System.Web, System.Web.SessionState
        public static class HttpApplicationExtensions
        {
            private static readonly FieldInfo storeField = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance);

            public static ISessionStateItemCollection GetSessionById(this HttpApplication app, String sessionId)
            {
                var module = app.Modules["Session"] as SessionStateModule;

                if (module == null)
                {
                    return (null);
                }

                var provider = storeField.GetValue(module) as SessionStateStoreProviderBase;

                if (provider == null)
                {
                    return (null);
                }

                Boolean locked;
                TimeSpan lockAge;
                Object lockId;
                SessionStateActions actions;

                var data = provider.GetItem(HttpContext.Current, sessionId.Trim(), out locked, out lockAge, out lockId, out actions);

                if (data == null)
                {
                    return (null);
                }

                return (data.Items);
            }
        }

    As you can see, it extends the HttpApplication class; that is because we need access to the modules collection in order to get at the Session module. Use with care!

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and requests that the user compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually, the VMware Kernel Module Updater dialog pops up, asks for root authentication, and completes the compilation: VMware Player or Workstation checks whether modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as superuser (root), the dialog stalls like so:

    (Screenshot: VMware Kernel Module Updater stalls while stopping the services.)

    Prior to version 5.x of VMware Player or version 7.x of VMware Workstation, you would run a command like:

        $ sudo vmware-config.pl

    to resolve the module version conflict, but this doesn't work anymore.

    Solution
    Instead, you have to execute the following line in a terminal or console window:

        $ sudo vmware-modconfig --console --install-all

    Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware has already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411).

    Update
    As of today I had the first kernel upgrade, to version 3.8.0-22, in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • Update kernel patch for VMware Player 4.0.3

    As I stated some days ago, after upgrading to Ubuntu Precise Pangolin, aka 12.04 LTS, I had a minor obstacle with VMware products. Today, VMware offered an upgrade to Player 4.0.3 for security-related reasons. Initially, I thought that this update might have the patch for kernel 3.2.0 integrated, but sadly that is not the case.

    'Hacking' the kernel patch
    My first intuitive try, running the existing patch against the sources of VMware Player 4.0.3, failed, as the patch by Stefano Angeleri (weltall) is originally written explicitly against Workstation 8.0.2 and Player 4.0.2. But this is nothing to worry about seriously. Just fire up your editor of choice and modify the version signature for VMware Player, like so:

        nano patch-modules_3.2.0.sh

    And update line 8 for the new version:

        plreqver=4.0.3

    Save the shell script and run it as superuser (root):

        sudo ./patch-modules_3.2.0.sh

    In case you previously patched your VMware sources, you have to remove some artifacts beforehand. Otherwise, the patch script will inform you like so:

        /usr/lib/vmware/modules/source/.patched found.
        You have already patched your sources. Exiting

    In that case, simply remove the 'hidden' file and run the shell script again:

        sudo rm /usr/lib/vmware/modules/source/.patched
        sudo ./patch-modules_3.2.0.sh

    To finalise your installation, either restart the vmware service or reboot your machine. On first start, VMware will present you with their EULA, which you have to accept, and everything gets back to normal operation mode. Currently, I would assume that in the case of VMware Workstation 8.0.3 you can follow the same steps as just described.

    Read the article

  • Error while installing Komparator4

    - by Lucio
    I downloaded the Komparator source from this page. The INSTALL file in the source says the following:

        Unpack komparator4-xxx.tar.bz2, and open a shell inside this directory
        mkdir build
        cd build
        cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..
        make
        sudo make install

    I unpacked the file, made the directory, and entered it, but when I tried to run cmake (step 3) the terminal printed the following errors, preventing me from running make and make install:

        CMake Error at /usr/share/cmake-2.8/Modules/FindKDE4.cmake:98 (MESSAGE):
          ERROR: cmake/modules/FindKDE4Internal.cmake not found in
          /home/lucio/.kde/share/apps;/usr/share/kde4/apps
        Call Stack (most recent call first):
          CMakeLists.txt:2 (find_package)

        CMake Warning (dev) in CMakeLists.txt:
          No cmake_minimum_required command is present.  A line of code such as
            cmake_minimum_required(VERSION 2.8)
          should be added at the top of the file.  The version specified may be lower
          if you wish to support older CMake versions for this project.  For more
          information run "cmake --help-policy CMP0000".
        This warning is for project developers.  Use -Wno-dev to suppress it.

        -- Configuring incomplete, errors occurred!

    What do these errors mean, and how can I fix them?
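
    The first error says CMake's KDE4 detection cannot find FindKDE4Internal.cmake, which ships with the KDE development files rather than with Komparator itself. A hedged first step (assumption: stock Ubuntu/Kubuntu packaging, where kdelibs5-dev provides that file) is to install them and re-run the configure step:

        sudo apt-get install kdelibs5-dev    # provides FindKDE4Internal.cmake on KDE4-era Ubuntu
        cd build && cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..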

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am thinking about the best way to organize my code. The project is basically going to be a general management system capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wikis, forums, project management system, etc., each of them being just a separate 'module'. You can read more about it in my blog post here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks).

    For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage the users system. Then you just add on modules as you need them; the module I will be creating first is a simple blog module to test it out before I move on to the big module, which is a project management system.

    Now I am trying to think of the best way to structure this so that it is easy for users to add in their own modules, but easy for me to update the core system without worrying about the user modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When users want to add in their own modules, they would just add in a new project (or multiple projects).

    The thing is, I am not sure it is even possible to use multiple projects, each with its own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript, and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript, and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript, and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?

    Read the article

  • Weird 302 Redirects in Windows Azure

    - by Your DisplayName here!
    In IdentityServer I don't use Forms Authentication but the session facility from WIF. That also means that I implemented my own redirect logic to a login page when needed. To achieve that, I turned off the built-in authentication (authenticationMode="none") and added an Application_EndRequest handler that checks for 401s and does the redirect to my sign-in route. The redirect only happens for web pages and not for web services.

    This all works fine in local IIS, but in the Azure Compute Emulator and Windows Azure many of my tests are failing and I suddenly see 302 status codes where I expected 401s (the web service calls). After some debugging kung-fu and enabling FREB I found out that the Forms Authentication module is still in effect, turning 401s into 302s. My EndRequest handler never sees a 401 (despite turning forms auth off in config)! Not sure what's going on (I suspect some inherited configuration that gets in my way here). Even if it shouldn't be necessary, an explicit removal of the forms auth module from the module list fixed it, and I now have the same behavior in local IIS and Windows Azure. Strange.

        <modules>
          <remove name="FormsAuthentication" />
        </modules>

    HTH
    Update: Brock ran into the same issue, and found the real reason. Read here.

    Read the article

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we:

    1. develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or
    2. first create that release branch and only merge into the trunk when the branch is stable?

    I find the former easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on release branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems almost obsolete, because I could create a future release branch based on the most recent release branch rather than on the trunk. (Both options are sketched below.)

    Details based on a comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on one future release and not at all on existing releases. One or two might work on a bugfix for an existing release for short periods of time. Our work isn't compiled; it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more, so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform; often one month for the modules.
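
    For concreteness, the two options map onto version control commands roughly like this; a sketch using Git (the post does not name a VCS, so the commands and version numbers are illustrative):

        # Option 1: stabilize on trunk/master, then cut the release branch
        git checkout master
        # ...develop and stabilize 1.2 here...
        git branch release-1.2            # "save" the stable state

        # Option 2: branch first, merge back into trunk when stable
        git checkout -b release-1.3 master
        # ...stabilize on the branch...
        git checkout master
        git merge release-1.3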

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, when the task is interpreting system state and settings for a lay user?

    1. Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform these tasks, or
    2. Separate the concerns of interpretation and inspection into two modules?

    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from the inspection routines, and will have to adapt to changes in the OS just as much as the inspection will.

    I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be re-used. But a developer updating the product to deal with a new OS feature would have to not only write an inspection routine but also write an interpretation routine and link the two correctly; and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or worse yet, has to fix an inspection routine which broke after an OS patch. I wonder: is it better to have to patch one package a lot, or two packages, each somewhat less so?

    Read the article

  • How to customize the initrd embedded in or coming with the kernel image

    - by STATUS_ACCESS_DENIED
    I would like to add some tools, and not just kernel modules, to the initrd (initramfs-based). Now, I'm aware of how to unpack and repack the initrd with cpio, and I have even written a hook for /etc/initramfs-tools/hooks in the past to integrate a third-party kernel module. However, while the available script libraries seem to be geared towards the integration of modules, none of them seems to be for the integration of other entities (in particular programs and their dependencies).

    What options do I have to automate the integration of some useful recovery tools into the initrd? I'm talking about the "rescue" system that the machine drops into if it is unable to mount the root drive given to it by the boot loader. Please note that I don't want the SquashFS approach as used for live CDs, because for the issue at hand it will be by far sufficient to include some relatively small tools that aid in recovery of the system (when it gets stuck in the initrd and can't boot further). Also, when the machines run into the issue we have had in the past, they tend to boot into the rescue system, but a few tools are missing there to kick the system back on trail...
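
    For Debian/Ubuntu initramfs-tools specifically, hooks can copy arbitrary programs together with their shared-library dependencies via copy_exec from the hook-functions library; a minimal sketch, with the hook name and tool paths as illustrative choices:

        #!/bin/sh
        # /etc/initramfs-tools/hooks/rescue-tools (illustrative name)
        PREREQ=""
        prereqs() { echo "$PREREQ"; }
        case "$1" in
            prereqs) prereqs; exit 0 ;;
        esac

        . /usr/share/initramfs-tools/hook-functions
        copy_exec /sbin/fdisk /sbin          # copies the binary plus the libraries it links against
        copy_exec /usr/bin/strace /bin

    followed by sudo update-initramfs -u to rebuild the image with the extra tools inside.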

    Read the article

  • I can't run NetBeans even though it installed successfully

    - by David
    I'm new to Ubuntu as well as NetBeans. I installed NetBeans, and I've made sure to install all the JDKs and JREs I could find. It installed without errors. I also saw this question and made sure I followed all the instructions there as well. I never got any error messages of any kind. So far as I know, it installed okay. However, when I try to run my project in NetBeans, I get a message like this at the bottom of the NetBeans IDE:

        ant -f /root/NetBeansProjects/samp1 -Djsp.includes=/root/NetBeansProjects/samp1/build/web/one.jsp -DforceRedeploy=false -Dclient.urlPart=/one.jsp -Ddirectory.deployment.supported=true -Djavac.jsp.includes=org/apache/jsp/one_jsp.java -Dnb.wait.for.caches=true run
        /root/NetBeansProjects/samp1/nbproject/build-impl.xml:774: The libs.CopyLibs.classpath property is not set up.
        This property must point to org-netbeans-modules-java-j2seproject-copylibstask.jar file which is part
        of NetBeans IDE installation and is usually located at <netbeans_installation>/java<version>/ant/extra folder.
        Either open the project in the IDE and make sure CopyLibs library exists or setup the property manually.
        For example like this:
          ant -Dlibs.CopyLibs.classpath=a/path/to/org-netbeans-modules-java-j2seproject-copylibstask.jar

        BUILD FAILED (total time: 0 seconds)
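
    The error text itself suggests the workaround: point the property at the copylibstask jar that ships with NetBeans. A hedged example invocation (the installation path varies between setups, so the path below is illustrative):

        ant -Dlibs.CopyLibs.classpath=/usr/local/netbeans-7.3/java/ant/extra/org-netbeans-modules-java-j2seproject-copylibstask.jar run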

    Read the article

  • I can't hear any sound on Ubuntu 11.10 on a Dell Inspiron N5010

    - by Ahmed
    I have a problem: I can't hear any sound, and I don't know where to start. I did the following:

        lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
            Subsystem: Dell Device 0447
            Flags: bus master, fast devsel, latency 0, IRQ 48
            Memory at fbf00000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel
        --
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
            Subsystem: Dell Device 0447
            Flags: bus master, fast devsel, latency 0, IRQ 49
            Memory at fbe40000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel

    It seems that I have two sound cards. Is that normal? I also did this:

        aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Also, in the sound settings GUI I have two hardware profiles for sound cards, but neither of them works when I test the speakers. Where should I start searching?
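
    Two audio devices (one analog, one HDMI) are normal on machines with an HDMI-capable GPU. As a hedged starting point, testing the analog card directly and checking the mixer often narrows the problem down (the device names below are taken from the aplay output above):

        speaker-test -c 2 -D plughw:0,0    # play test noise on card 0, the analog STAC92xx
        alsamixer -c 0                     # verify Master/PCM/Speaker are not muted ("MM")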

    Read the article

  • F# Application Entry Point

    - by MarkPearl
    Up to now I have been looking at F# for modular solutions, but have never considered writing an end-to-end application. Today I was wondering how one would even start to write an end-to-end application, and realized that I didn't even know where the entry point is for an F# application. After browsing MSDN a bit I got a basic example of an F# application with an entry point:

        [<EntryPoint>]
        let main args =
            printfn "Arguments passed to function : %A" args
            // Return 0. This indicates success.
            0

    Pretty simple stuff... but what happens when you have a few modules in a program? So I created an F# project with two modules and a main module, as illustrated in the image below. When I try to compile my program I get a build error:

        A function labeled with the 'EntryPointAttribute' attribute must be the last declaration
        in the last file in the compilation sequence, and can only be used when compiling to a .exe

    What does this mean? After some more reading I discovered that Program.fs needs to be the last file in the F# application; the order of the files in an F# solution is important. How do I move a source file up or down? I tried dragging the Program.fs file below ModuleB.fs, but it wouldn't allow me to. Then I thought to right-click on a source file, and got the following menu. Voilà: to move the source file to the bottom of the solution you can select the "Move Up" or "Move Down" option. Now that I got this right, I decided to put some code in ModuleA and ModuleB, and I have the start of a basic application structure.

    ModuleA code:

        namespace MyApp

        module ModuleA =
            let PrintModuleA =
                printf "hello a \n"
                ()

    ModuleB code:

        namespace MyApp

        module ModuleB =
            let PrintModuleB =
                printf "hello b \n"
                ()

    Program code:

        // Learn more about F# at http://fsharp.net
        #light

        namespace MyApp

        module Main =
            open System

            [<EntryPoint>]
            let main args =
                ModuleA.PrintModuleA
                let endofapp = Console.ReadKey()
                0

    Read the article

  • Why am I getting this error while installing GNOME?

    - by Sreehari Rajendran
    I have Raring Ringtail. I installed GNOME a couple of days ago. I try to install extensions, but the site says I don't have the latest version. I type the command sudo apt-get install gnome-shell and I get this error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-shell is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 138 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up initramfs-tools (0.103ubuntu0.7) ...
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-25-generic
        cp: cannot stat ‘/module-files.d/libpango1.0-0.modules’: No such file or directory
        cp: cannot stat ‘/modules/pango-basic-fc.so’: No such file or directory
        E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
        update-initramfs: failed for /boot/initrd.img-3.8.0-25-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why?
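
    The failing step is the plymouth initramfs hook, which cannot find the pango module files it wants to copy. A hedged first attempt (assumption: those files belong to libpango1.0-0, so reinstalling it should restore them) is:

        sudo apt-get install --reinstall libpango1.0-0   # restore the missing pango module files
        sudo dpkg --configure -a                         # re-run the pending initramfs-tools configure step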

    Read the article

  • How to enable a Web portal-based rich enterprise platform on different domains and hosts using JS without customization/server configuration

    - by S.Jalali
    Our company Coscend has built a Web portal-based communications and cloud collaboration platform using JavaScript (JS), embedded in HTML5 and formatted with CSS3. Other technologies used in the core code include Flash, Flex, PostgreSQL, and MySQL. Our team would like to host this platform in five different Windows and Linux environments that run different types of Web servers, such as IIS and Apache.

    Technical challenge: Each of these Windows and Linux servers has a different host name and domain name (and IP address), but we would like to keep our enterprise platform independent of host server configuration.

    Possible approach to a solution: We think an API (an interface module with a GUI) is needed to accomplish this level of modularity and flexibility while deploying at our enterprise customers.

    Seeking your insights: In this context, our team would appreciate your guidance on the following:

    1. Is there an algorithmic method to implement this Web portal-based platform in these Windows and Linux environments while separating it from server configuration, i.e., customizing the host name, domain name, and IP address for each individual instance? For example, would it be suitable to create some JS variables / objects for host name and domain name and call them in the different implementations? If a reference to the host/domain names occurs in hundreds of portal modules, these variables or JS objects would replace that.
    2. If so, what is the best way to make these object modules written in JS portable and re-usable across different environments and instances for enterprise customers?

    Here is an example of the implemented code for the said platform. The following Web site (www.CoscendCommunications.com) was built using this enterprise collaboration platform and has the base code examples of the platform. This Web site is domain-specific. We would like to make the underlying platform domain- and host-independent, which will allow it to be deployed in multiple instances for our enterprise customers.

    Read the article

  • AR242x / AR542x wireless card not working

    - by Pipan87
    My wifi worked perfectly until I updated to the latest version of Ubuntu. Now I don't find any wireless connections at all. I have tried lots of guides on the internet, but I can't get it to work. It did start working once after I typed something I don't remember into the Terminal, but after rebooting it stopped working again. Some info (I don't know if you need more to help):

        01:00.0 Ethernet controller: Atheros Communications AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0)
            Subsystem: Acer Incorporated [ALI] Device 022c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at 55200000 (64-bit, non-prefetchable) [size=256K]
            I/O ports at 3000 [size=128]
            Capabilities: [40] Power Management version 2
            Capabilities: [48] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [58] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [180] Device Serial Number ff-93-2e-de-00-23-8b-ff
            Kernel driver in use: ATL1E
            Kernel modules: atl1e

        02:00.0 Ethernet controller: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) (rev 01)
            Subsystem: Foxconn International, Inc. Device e00d
            Flags: bus master, fast devsel, latency 0, IRQ 18
            Memory at 54100000 (64-bit, non-prefetchable) [size=64K]
            Capabilities: [40] Power Management version 2
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
            Capabilities: [60] Express Legacy Endpoint, MSI 00
            Capabilities: [90] MSI-X: Enable- Count=1 Masked-
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Kernel driver in use: ath5k
            Kernel modules: ath5k
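
    Since lspci already shows ath5k bound to the card, a hedged next step is to check for soft/hard radio blocks and watch the driver's messages while reloading it (standard commands, not specific to this post):

        rfkill list                              # any "blocked: yes" would explain the missing networks
        sudo modprobe -r ath5k && sudo modprobe ath5k
        dmesg | tail -n 30                       # look for ath5k errors right after the reload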

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called class? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions, I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations.

    Currently I have a tool with a single registered autoload function that picks apart the class name, tries to predict where the file is, and then includes it. Due to the naming conventions for the project this has been pretty simple:

        if has namespace
            if in template namespace
                look in Root\Templates
            else
                look in Root\Modules\Namespace
        else
            look in Root\System
        if file exists
            include

    But we are starting to include interfaces and traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code copying:

        function systemAutoloadFunc
            logic to create probable filename
            if filename exists in system
                include it and return true
            else
                return false

        function moduleAutoloadFunc
            logic to create probable filename
            if filename exists in modules
                include it and return true
            else
                return false

    Every autoload function will follow that pattern, and the last part of each function (if filename exists, include and return true, else return false) is going to be identical code. This makes me paranoid about having to update it across the board if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and are multiple functions with some identical code the best option?

    Read the article

  • RequireJS: JavaScript for the Enterprise

    - by Geertjan
    I made a small introduction to RequireJS via some of the many cool new RequireJS features in NetBeans IDE. I believe RequireJS, and the modularity and encapsulation and loading solutions that it brings, provides the tools needed for creating large JavaScript applications, i.e., enterprise JavaScript applications. (Sorry for the wobbly sound in the above.)

    An interesting comment on the above by my colleague John Brock: One other advantage that RequireJS brings is lazy loading of resources. In your first example, every one of those .js files is loaded when the first file is loaded in the browser. By using the require() call in your modules, your application will only load the JavaScript modules when they are actually needed. It makes for faster startup in large applications. You could show this by showing the libraries that are loaded in the Network Monitor window.

    So I did as suggested: Click the screenshot to enlarge it, and notice how the Network Monitor is helpful in the context of RequireJS troubleshooting.

    Read the article

  • Wi-Fi/LAN not working after installing Ubuntu 13.10

    - by user183025
    I can't connect to my WIFI after installing Ubuntu 13.10. I tried re-installing it three times (thinking that I did something wrong during the installation), but I still can't connect to my WIFI. It just gives me the message that I'm offline and can't connect. I also tried connecting to the internet using a LAN cable, with the same result. I tried Google, but it seems there's no solution to this yet... Does anybody know how to solve this? Thanks! FYI:

        H/W path    Device    Class      Description
        ======================================================
                              system     RC530/RC730 (To be filled by O.E.M.)
        /0                    bus        RC530/RC730
        /0/0                  memory     64KiB BIOS
        /0/4                  processor  Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
        /0/4/5                memory     32KiB L1 cache
        /0/41                 memory     4GiB System Memory
        /0/41/0               memory     DIMM [empty]
        /0/41/1               memory     4GiB SODIMM DDR3 Synchronous 1333 MHz (0.8 ns)
        /0/100                bridge     2nd Generation Core Processor Family DRAM Controller

        02:00.0 Network controller: Intel Corporation Centrino Wireless-N 130 (rev 34)
            Subsystem: Intel Corporation Centrino Wireless-N 130 BGN
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at f7200000 (64-bit, non-prefetchable) [size=8K]
            Capabilities: <access denied>
            Kernel driver in use: iwlwifi
            Kernel modules: iwlwifi

        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
            Subsystem: Samsung Electronics Co Ltd Device c0c1
            Flags: bus master, fast devsel, latency 0, IRQ 41
            I/O ports at b000 [size=256]
            Memory at f2104000 (64-bit, prefetchable) [size=4K]
            Memory at f2100000 (64-bit, prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: r8169
            Kernel modules: r8169

    Read the article

  • VMware Player 5.0 or VMware Workstation 9.0 after upgrade to Ubuntu 12.10

    The upgrade process
    Upgrading Ubuntu 12.04 to the latest version, 12.10 aka Quantal Quetzal, is straightforward, and you only need to follow the official upgrade instructions. The short version on the console looks like this:

        sudo do-release-upgrade

    This will update the repository entries and start the upgrade process. After some minutes or hours of download and installation, you have to reboot your system once to get the new kernel loaded. At the time of writing, I'm on '3.5.0-17-generic'. And as with any modification of the kernel version, you have to compile the necessary kernel modules to get VMware Player or Workstation up and running. Usually, this happens the first time you try to start your VMware software, and that's it. Well, again, not so this time.

    Getting the kernel patch
    Luckily, the community over at VMware is very active, and you can get a new kernel patch in the online forums here. Download it, put it in a folder where you have write permissions, and extract the archive on the console like so:

        tar -xjvf vmware9_kernel35_patch.tar.bz2

    Then change into the newly created folder:

        cd vmware9_kernel3.5_patch/

    And execute the available shell script as root (superuser) like so:

        sudo ./patch-modules_3.5.0.sh

    This will stop any running instances of VMware software, patch the source files, and run the compile process for your active environment. This might take some time depending on your machine, and once it completes you can start VMware Player or Workstation as before. If you apply the patch again, the script will simply quit with the following output:

        /usr/lib/vmware/modules/source/.patched found.
        You have already patched your sources. Exiting

    You might remove the .patched file in case you upgraded/changed your kernel and need to apply the patch again.

    Disclaimer: The patch is "as-is"; the patcher was originally created by Artem S. Tashkinov and later modified by An_tony. Please refer to the VMware forum in case of questions or problems. There are also patches available for older versions of VMware Player or Workstation.

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Any time I try to install Python packages using the command:

        sudo apt-get install python-package

    I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
         linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
         linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and kicked up an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

        sudo apt-get -f install

    But this only gave me several instances of the following error:

        dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
         unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new'
         (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'):
         No space left on device

    Now maybe I'm way off-base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h:

        Filesystem          Size  Used Avail Use% Mounted on
        /dev/sda1           6.9G  5.7G  869M  88% /
        udev                494M  4.0K  494M   1% /dev
        tmpfs               201M  784K  200M   1% /run
        none                5.0M     0  5.0M   0% /run/lock
        none                501M   76K  501M   1% /run/shm
        VB_Shared_Folder    466G  271G  195G  59% /media/sf_VB_Shared_Folder

    When I run sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB out of my virtual machine's current free space, or 192 MB on top of the rest of my free space? As I said, my machine normally allocates additional disk space from the host machine dynamically, so I don't see why there would be space restrictions at all...
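
    The df output shows the root filesystem at 88% with only 869M free, which is exactly what the dpkg error complains about: a dynamically allocated VirtualBox disk only grows up to the maximum size set when it was created, so the guest filesystem can still fill up. A hedged way to free some room before retrying (standard apt commands, not specific to this post):

        sudo apt-get clean          # remove cached .deb files from /var/cache/apt/archives
        sudo apt-get autoremove     # drop packages (e.g. old kernels) that are no longer needed
        df -h /                     # re-check free space, then retry: sudo apt-get -f install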

    Read the article

  • No sound, can't adjust alsamixer volume

    - by user734613
    I have no sound on my Ubuntu and I do not know why. I saw a lot of posts about this and tried a lot of the solutions that I found (especially this one), but it still does not work. I suspect alsamixer: I can't adjust the sound. http://img15.hostingpics.net/pics/447765Screenshotfrom20120621140430.png
    I tried to reinstall the ALSA driver using this tutorial, but it seems the package does not exist anymore:

        E: Unable to locate package linux-alsa-driver-modules-3.2.0-25-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-25-generic'

    Here is my configuration:

        cat /proc/driver/nvidia/version
        NVRM version: NVIDIA UNIX x86_64 Kernel Module 295.40 Thu Apr 5 21:37:00 PDT 2012
        GCC version: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

        aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        grep eld_valid /proc/asound/NVidia/eld*
        /proc/asound/NVidia/eld#0.0:eld_valid 0
        /proc/asound/NVidia/eld#0.1:eld_valid 0

    Thanks in advance for your help.
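
    Note that aplay -l lists only the NVIDIA HDMI outputs, and eld_valid 0 on both means no HDMI sink is currently reporting audio capability. As a hedged first step (assumption: the alsa-utils force-reload helper is available, as on stock Ubuntu), reloading the sound stack and re-checking which cards ALSA sees is a common starting point:

        sudo alsa force-reload      # reload all ALSA kernel modules
        cat /proc/asound/cards      # check whether an analog (non-HDMI) codec shows up at all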

    Read the article

  • XML Parsing Error at 1:1544. Error 4: not well-formed (invalid token)

    - by Steve
    I have installed Joomla 1.5.22 on a new hosting account, which doesn't have a domain yet, so its public URL is http://cp-013.micron21.com/~annimac/
    A message is displayed saying:

        XML Parsing Error at 1:1544. Error 4: not well-formed (invalid token).

    The source code for this message is:

        <dl id="system-message">
          <dt class="error">Error</dt>
          <dd class="error message fade">
            <ul>
              <li>XML Parsing Error at 1:1544. Error 4: not well-formed (invalid token)</li>
            </ul>
          </dd>
        </dl>

    There is nothing in /logs to indicate what the problem is. I have uploaded the following folders from a freshly unzipped copy of Joomla 1.5.22:

        administrator
        components
        includes
        language\en-GB
        libraries
        modules
        plugins
        templates\ja_purity
        xmlrpc

    and the issue remains. I have no custom or additional plugins, modules, or components installed. If I change templates, the problem remains. What is the problem?

    Read the article

  • Not able to install LTT tool in 10.04

    - by Ashoka
    I have tried the commands below to install LTT on VMware running Ubuntu 10.04, but got the following errors. Please help. From the link https://launchpad.net/~lttng/+archive/ppa:

        $ sudo apt-add-repository ppa:lttng/ppa
        $ sudo apt-get update
        $ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace

    Running the first command:

        user@usr:~$ sudo apt-add-repository ppa:lttng/ppa
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        gpg: requesting key 33739778 from hkp server keyserver.ubuntu.com
        gpg: unable to execute program `/usr/local/libexec/gnupg/gpgkeys_curl': No such file or directory
        gpg: no handler for keyserver scheme `hkp'
        gpg: keyserver receive failed: keyserver error

    Then:

        user@usr:~$ sudo apt-get update
        .....
        user@usr:~$ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package lttng-tools

    My system details:

        user@usr:~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
        user@usr:~$ uname -a
        Linux usr 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux

    Regards, Ashoka
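
    The gpg output points at a locally built GnuPG under /usr/local that is missing its keyserver helper, so the PPA signing key was never imported. A hedged workaround (assumption: apt-key can still reach the keyserver through a working gpg; the key ID is taken from the output above) is to import the key explicitly and update again:

        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 33739778
        sudo apt-get update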

    Read the article
