Search Results

Search found 22263 results on 891 pages for 'desktop background'.

Page 604 of 891

  • How to get sharing options to stick - ubuntu 12.04

    - by Devin
    I'm having a really hard time getting sharing set up on my 12.04 system. I've tried both the desktop and server versions. I'm a bit of a Linux n00b, so I need a GUI; the command line is beyond me, and I have no time to learn it (at least not until the shares are set up). My problem: whenever I try to set permissions in Nautilus, they revert back to the default of "none". Basically, when I choose an option, it doesn't stick. I can create shares, and Nautilus asks if I want to add the permissions automatically, but those don't stick either. When I look at the shared folders from Windows (or from my Android phone, or Mac), I get permission errors and can't log in, despite having ticked "allow guest access". I have no idea what to do or where to go. I've searched forums and Google and tried everything I came across, to no avail. I've even tried Mint builds to see if it behaves differently; no change there either. Please help! I really want to set up a server to share my media files and do backups in my house. Thanks for your help!
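    For context, Nautilus's "Share this folder" dialog drives Samba's usershare mechanism under the hood. A minimal sketch of the equivalent terminal commands, assuming Samba is installed; the folder path and share name here are placeholders:

        # create a guest-readable usershare (roughly what the Nautilus checkboxes map to)
        net usershare add Media /home/devin/Media "Shared media" Everyone:R guest_ok=y
        # confirm the share definition actually stuck
        net usershare info Media

    If net usershare info shows a correct definition but clients still get permission errors, the usual suspects are the Unix permissions on the folder itself and whether the user is in the sambashare group, which Ubuntu requires for creating usershares.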

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents. (It does other things as well, but this is enough background to illustrate the issue.) The basic dev architecture today is:

        • a single-page Ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail)
        • a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
        • a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)

    And our hardware architecture today is:

        • 2 servers to host the web UI portion of the application
        • a load balancer to route requests to the 2 different web app servers
        • a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I spread this backend Windows service logic across multiple machines (distribute it)? The service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to rewrite it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances, so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Correct nvidia+intel graphics setup in 14.04

    - by Espressofa
    Just upgraded to 14.04 to try to fix some other issues. Now something has gone wrong with my graphics. I have a ThinkPad T530 with Intel and Nvidia graphics cards.

        $ inxi -SGx
        System:   Host: xyz Kernel: 3.13.0-24-generic x86_64 (64 bit, gcc: 4.8.2)
                  Desktop: N/A Distro: Ubuntu 14.04 trusty
        Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0
                  Card-2: NVIDIA GF108M [NVS 5400M] bus-ID: 01:00.0
                  X.Org: 1.15.1 drivers: fbdev,vesa,intel,nouveau (unloaded: nvidia)
                  Resolution: [email protected]
                  GLX Renderer: N/A GLX Version: N/A Direct Rendering: N/A

        $ glxinfo
        name of display: :0
        Xlib: extension "GLX" missing on display ":0".
        (the same Xlib line repeats many times)
        Error: couldn't find RGB GLX visual or fbconfig

    I'm not sure what I did, but something is clearly wrong with my graphics, as the commands above show. nvidia-detector says "none" as well. I used to have Bumblebee, but some website said to remove it, and now something's clearly wrong. What's the right way to set things up? Should I try to add Bumblebee back? Here's what's installed now:

        $ dpkg --get-selections | grep nvidia
        nvidia-319                install
        nvidia-331                install
        nvidia-libopencl1-331     install
        nvidia-opencl-icd-331     install
        nvidia-prime              install
        nvidia-settings           install
        nvidia-settings-319       install
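    For reference, a commonly suggested cleanup for 14.04 Optimus machines, assuming you stay with the stock nvidia-prime stack rather than Bumblebee; having two driver generations (nvidia-319 and nvidia-331) installed at once is a frequent cause of the "unloaded: nvidia" symptom. A hedged sketch, not a guaranteed fix:

        # remove the conflicting older driver generation, then reinstall one cleanly
        sudo apt-get purge nvidia-319 nvidia-settings-319
        sudo apt-get install --reinstall nvidia-331 nvidia-prime
        # choose which GPU renders the session (prime-select comes from nvidia-prime)
        sudo prime-select nvidia    # or 'intel' to render on the integrated GPU
        # reboot afterwards so the correct kernel module is loaded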

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3D Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. The app is to be installed on two identical machines; I have installed it on both, and it only works properly on one. On the other, it fails to load the models and textures properly, so the game runs, but instead of the models I can only see black rectangles on a blue background. The problem is the same in all browsers, and I get no errors from either the player or JavaScript. The only difference between these computers is that the one with the problem runs Windows 8.1 and the other runs Windows 8. Could this be the cause of the issue? It works fine on my own computer with Windows 8.1, although both of the other computers have specs significantly lower than mine. I have searched everywhere, and most reports suggest the problem lies with individual games; I suspect it has to do with the computer itself, because the game runs properly elsewhere. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2 GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention: I'm running Unity Web Player 4.3.5, and the version on the other two computers is 4.5.0.

    Read the article

  • Where to put business logic in MVC design?

    - by BriskLabs Pakistan
    I have created a simple MVC Java application that adds records to a database through data forms. My app collects data, validates it, and stores it; the data is sourced online from different users and is mostly numeric. On the numeric data stored in the database (SQL Server), I want my app to perform computations and display the results. The user is not interested in how the computations are done, so they must be encapsulated; the user should only see the simple computed result, for example (column A - column B) / column C, and so on. I know how to write stored procedures for this, but I want a 3-tier app. I want the data that I put into the database as a record to be worked on by performing calculations on it; however, the original data should remain unaffected, while the new, post-calculation data must be stored as a new entity record in the database. Where should I write the code for this background calculation? As it is the rules and business logic, does it belong in new JavaBeans files?

    Read the article

  • Why does running this program on 11.10 give a 'GLIBC_2.15 not found' error?

    - by RafLance
    I am trying to install Absinthe 2.0.4 on Ubuntu 11.10 on a netbook. When I try to run the install file, this keeps happening:

        rafael@RafLaptop:~/Desktop/absinthe-linux-2.0.4$ ./absinthe.x86
        ./absinthe.x86: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./absinthe.x86)

    Do I need to upgrade GLIBC? If so, how do I do that? Since I'm on a netbook I can't use a LiveCD, so I wanted to know whether I can fix this without reinstalling my whole OS. Any explanation of what GLIBC actually is would be great too, since this is a learning experience for me. I know that GLIBC is provided as libc.so.6, so I tried to run sudo apt-get install libc.so.6, but was told that it was up to date. But GLIBC isn't? I hope this articulates my problem well; if any info is missing or anything needs clarifying, please let me know.

    EDIT/UPDATE: After some help from user izx in the Ask Ubuntu chat room, I have gathered the following:

        • I need to run this program on Ubuntu 12.04 or recompile it from source.
        • Upgrading libc on Oneiric to 2.15, while possible, is not an easy task and is not officially supported.
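    As a quick check of what is actually installed, assuming the stock Oneiric packages:

        ldd --version            # reports the glibc version the dynamic loader uses
        apt-cache policy libc6   # libc6 is the package that ships /lib/.../libc.so.6

    There is no package literally named libc.so.6; the library belongs to the libc6 package, which is why apt reported nothing to upgrade. Oneiric's libc6 is glibc 2.13, while the binary was linked against 2.15, the version that ships with Precise, which matches the advice above.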

    Read the article

  • Ugly Boot Screen after upgrading to 12.10

    - by Sir Linuxalot
    Is there a way to change the ugly boot screen in 12.10? It seems to have rolled back to that 8-bit, blocky-looking thing with tiny orange dots underneath, and then it breaks into process text below that; it looks ghastly. I've read some tutorials on getting Plymouth to do some neat things, but they were for older versions of Ubuntu. I'm running a GeForce GTX 460, if that matters. Any help would be appreciated. Update: I've noticed/found a couple of things. The upgrade on my laptop didn't do this; it still uses the "normal" Ubuntu boot logo (using Plymouth, I assume), so something is off with my desktop specifically. I also found and installed Super Boot Manager to see if that would help; with it I enabled Plymouth and added a new theme, but the machine still boots with the blocky, ugly logo. Finally, I experimented with GRUB at boot: I added "nomodeset" after "quiet splash", and also tried it with "quiet splash" deleted. None of these worked. I'll keep hunting...
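    For what it's worth, the usual way to reselect and rebuild the Plymouth theme on 12.10, assuming the theme packages are installed, is:

        # pick the theme (e.g. the stock ubuntu-logo) from the list it prints
        sudo update-alternatives --config default.plymouth
        # bake the chosen theme into the initramfs used at boot
        sudo update-initramfs -u

    That said, the blocky low-resolution splash is commonly a framebuffer-resolution problem with the proprietary NVIDIA driver rather than a missing theme, so this may only partly help on a GTX 460.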

    Read the article

  • Ubuntu stops using Nvidia driver after kernel upgrade

    - by Daniel
    Just updated and restarted, and Ubuntu no longer displays correctly, so I've temporarily switched to the Nouveau driver. The update history reveals the kernel was updated, amongst many things, and the following were installed:

        linux-image-3.5.0-19-generic (3.5.0-19.30)
        linux-image-extra-3.5.0-19-generic (3.5.0-19.30)

    I've encountered this type of problem quite recently, so I decided to reapply the same steps that solved it before:

        sudo apt-get install linux-headers-3.5.0-19
        sudo apt-get install linux-headers-3.5.0-19-generic
        sudo depmod -a
        sudo modprobe nvidia
        sudo /etc/init.d/*dm restart

    When installing linux-headers-3.5.0-19-generic, I get an error; the terminal output is as follows:

        Setting up linux-headers-3.5.0-19-generic (3.5.0-19.30) ...
        Examining /etc/kernel/header_postinst.d.
        run-parts: executing /etc/kernel/header_postinst.d/dkms 3.5.0-19-generic /boot/vmlinuz-3.5.0-19-generic
        Error! Problems with depmod detected. Automatically uninstalling this module.
        DKMS: Install Failed (depmod problems). Module rolled back to built state.

    However, I ignored the error and continued the steps with sudo depmod -a, installed nvidia-current, then did sudo modprobe nvidia, which yielded:

        FATAL: Error inserting nvidia_current (/lib/modules/3.5.0-19-generic/updates/dkms/nvidia_current.ko): No such device

    Upon restart, the Nvidia driver now works! BTW, do those error messages imply I broke something? I don't want to get happy that I've fixed it and then have it stop working later on. The system is a Dell XPS L702X with NVIDIA GeForce GT 555M and a 17" screen.
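    A quick, hedged way to check whether the DKMS module really did end up built and installed for the new kernel, assuming the stock dkms tooling:

        dkms status                              # shows each module's state per kernel
        sudo dkms autoinstall -k "$(uname -r)"   # rebuild anything missing for the running kernel

    If dkms status lists the nvidia module as installed for 3.5.0-19-generic, the earlier errors were transient and nothing should be left broken.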

    Read the article

  • Tell me why I should bother using Linux if it's all about problems getting the OS to install or work properly? [closed]

    - by Vilhjalmur Magnussin
    Why should I spend days trying to get Ubuntu to either install and/or work properly? I'm using an Acer Timeline X laptop; if I install 10.04 the wireless doesn't work, and if I try installing 11.04 it either won't install, or it installs so full of bugs that my computer freezes all the time. So please, I'm all ears: give me one or two good reasons to keep spending time on this (in the hope it eventually works) before I decide to focus my time on other things, like productivity (using Windows, as I've been doing successfully for the last 10 years). This is the second time I've given Ubuntu a try; the first was in 2010, with Ubuntu Studio and Ubuntu Desktop, and it ended with me shifting back to Windows, since I had spent more time getting everything to work than actually working. I really don't understand why it needs to be like this. Why go on trying when all I see are forums full of discussions about problems that people have difficulty fixing? Or maybe there is just one special type of computer that works well with Linux? I would very much like to know which computer that is. So, if it's not too much trouble, I really want to hear from someone who has something good to say about going through all this trouble just to get a working environment up and running, since I already have a working environment up and running called Windows. Thanks, Villi.

    Read the article

  • Cleaning Up Online Games with Positive Enforcement

    - by Jason Fitzpatrick
    Anyone who has played online multiplayer games, especially those focused on combat, can attest to how caustic other players can be. League of Legends' creators are fighting that, rather successfully, with a positive-reinforcement honor system. The Mary Sue reports:

        Here's the background: Six months ago, Riot established Team Player Behavior (affectionately called Team PB&J), a group of experts in psychology, neuroscience, and statistics (already, I am impressed). At the helm is Jeffrey Lin, better known as Dr. Lyte, Riot's lead designer of social systems. As quoted in a recent article at Polygon: "We want to show other companies and other games that it is possible to tackle player behavior, and with certain systems and game design tools, we can shape players to be more positive." Which brings us to the Honor system. Honor is a way for players to reward each other for good behavior. This is divvied up into four categories: Friendly, Helpful, Teamwork, and Honorable Opponent. At the end of a match, players can hand out points to those they deem worthy. These points are reflected on players' profiles, but do not result in any in-game bonuses or rewards (though this may change in the future). All Honor does is show that you played nicely.

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint toward the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API service in C# that is being refactored right now and should be able to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage database access on the Web API side. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:

        • Shared data-access DLL. The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system; this way, both solutions would be able to run a migration. The main problem is that updating the Web API system is much more complicated than updating the client (a ClickOnce update), and not every migration is important to the Web API. This would cause more update trouble and out-of-sync libraries.

        • Database-first on the Web API. The second idea was to use the database-first approach on the Web API side. But it seems that all annotations are lost on each model update.

    Other solutions based on stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or have any advice on how such a problem can be solved?

    Read the article

  • Ubuntu 13.10 No Sound

    - by spiersie
    I had been running 13.04 since last Monday, and today I upgraded to 13.10; on both versions I have not managed to get my sound working. I have gone into alsamixer and disabled auto-mute, and the volumes are up. If somebody thinks they can help me fix this, I will gladly follow any steps. Please spell out specifically any terminal commands you need me to run, whether to show specs or to try a fix, as I am not fluent with Linux commands; this desktop is my first system to run Linux, starting last Monday.

        blake@Blake-Ubuntu-PC:~$ lspci -v | grep -A7 -i "audio"
        00:01.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Trinity HDMI Audio Controller
                Subsystem: ASUSTeK Computer Inc. Device 8526
                Flags: bus master, fast devsel, latency 0, IRQ 53
                Memory at fef44000 (32-bit, non-prefetchable) [size=16K]
                Capabilities:
                Kernel driver in use: snd_hda_intel
        00:10.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB XHCI Controller (rev 03) (prog-if 30 [XHCI])
        00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD] FCH Azalia Controller (rev 01)
                Subsystem: ASUSTeK Computer Inc. Device 8445
                Flags: bus master, slow devsel, latency 32, IRQ 16
                Memory at fef40000 (64-bit, non-prefetchable) [size=16K]
                Capabilities:
                Kernel driver in use: snd_hda_intel
        00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 11)
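    Since this machine exposes two HDA devices (the HDMI audio controller and the onboard Azalia codec), a common first check is whether sound is simply being routed to the wrong card. A hedged sketch of the usual diagnostics:

        aplay -l                            # list ALSA playback devices; note the card/device numbers
        speaker-test -c 2 -D plughw:1,0     # placeholder card,device numbers; substitute values from aplay -l

    If speaker-test produces noise on one card but not the other, selecting the working device as the output in Sound Settings (or pavucontrol) usually resolves it.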

    Read the article

  • Dual monitors, screen resolution, xorg.conf.d

    - by Flase
    I do a lot of RTFM, but this one has me stuck. I have Ubuntu Studio 12.04 Precise Pangolin with XFCE as its default desktop. My old HIS ATI Radeon 9250 graphics card was smearing red crud across the screen with the generic driver, but the proprietary "fglrx" driver makes it work cleanly. The trouble is that the Catalyst control centre refuses to recognise my old card, so I must configure things manually to make sure both the DVI and VGA monitors are capable of the correct screen resolution (both 1280x1024) and a dual display. It used to be easy to just edit the existing xorg.conf file and add another resolution and so forth, but now there are automatic xorg.conf.d directories (more than one) with scant documentation, and creating a generic xorg.conf with a terminal command produces every setting imaginable. What I want is the simplest conf file that tells the system just the following:

        • my VGA monitor can do 1280x1024 at 60 Hz
        • the two monitors together may span 2560x1024
        • the VGA monitor is on the right
        • Xinerama, if it's needed

    Thank you. I don't think I need to bore you with log files, but please ask for further info. Mike
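    As a starting point, and only if fglrx honours RandR on this card, a runtime sketch with placeholder output names (run xrandr -q first to see the real ones; DVI-0 / VGA-0 are typical but not guaranteed):

        xrandr --output DVI-0 --mode 1280x1024 --primary \
               --output VGA-0 --mode 1280x1024 --right-of DVI-0

    The persistent equivalent is a small file such as /etc/X11/xorg.conf.d/10-monitors.conf containing only Monitor and Screen sections for those two outputs; everything else can be left to autodetection, which keeps the conf as minimal as the question asks for.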

    Read the article

  • Issues with Cinnamon?

    - by Corrodie
    I just recently switched my system over to Ubuntu 12.10 and decided on Cinnamon as my environment. It all worked fine at first, but, poorly educated, I started using Compiz and Emerald along with it, setting both as replacements in the startup processes. I now know that's a big, big mistake. Now when Cinnamon loads, I am greeted by my background image and only that; my only option seems to be to open a terminal. I was advised to attempt muffin --replace and mutter --replace, neither to any avail: the terminal closes, and I cannot open another one unless I completely reload. I went back to Unity, purged and autoremoved Cinnamon, Emerald, and compizconfig, and reinstalled Cinnamon, thinking that would solve the problem; no, it came back just as broken as before. So I reinstalled Ubuntu, then Cinnamon: still broken. I assume I must find a way to remove the replace commands, but as I have no menu, I'm not positive I can do that. Is there any way I can access the startup processes via terminal? I'd have thought, though, that if I completely removed Cinnamon, all configurations would be gone too, so it's just not making much sense. Is there some kind of reset I could possibly do? I've been browsing forums and questions here, all leading to things I'd already done, so it can't hurt to ask for myself. I apologize if you would rather I had posted this over at Mint. Next time I will definitely check compatibility instead of assuming something just has to work. Any help is greatly appreciated, thanks!

    EDIT: Although it didn't allow me to before, I can now access the settings and startup processes for Cinnamon via Unity, and after quickly removing the aforementioned processes I'm up and running again.
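    For anyone hitting the same wall without a working menu: the per-user startup entries live as plain .desktop files, so they can be inspected and removed from a terminal. A hedged sketch, with a placeholder file name:

        ls ~/.config/autostart/                                      # list user startup entries
        grep -il 'compiz\|emerald' ~/.config/autostart/*.desktop     # find the offending ones
        rm ~/.config/autostart/offending-entry.desktop               # placeholder name; use the file grep found

    Purging packages does not touch these files, which is why the broken setup survived both the purge and the reinstall: user configuration under ~/.config persists across them.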

    Read the article

  • Installing Ubuntu along with windows 7 on shrunk partition

    - by Thabo
    I am new to Ubuntu and am asking the Ubuntu community. First, this is not a duplicate question; it is rather a summary of the solutions and questions posted in this community related to installing Ubuntu along with Windows 7. I have bought a new HP laptop with its original Windows 7, and I want to install Ubuntu alongside 64-bit Windows 7. I ran the Ubuntu 12.04 Desktop installation CD, but the installer doesn't show the "along with Windows 7" option; it shows only two options. I read some questions and answers posted on this community, especially "Ubuntu 12.04 does not see windows already install on my computer (dual installation)", and tried the following. I ran the terminal in the live CD and tried the sudo dmraid -rE command and a dmraid remove command, but the terminal says there are no dmraid partitions. So I checked my partitions with GParted: there are partitions labeled C, HP Tools, Recovery, and System, and C contains the Windows 7 files. I shrank the C drive, so now I have 50000 MB of unallocated disk. I tried to create a partition on that space with GParted, but it says I can't create more than four primary partitions; of course, all four partitions Windows created are primary. So I went back to Windows 7 and tried to create a new volume in the unallocated space, but it says the new volume would be a dynamic partition, and that another OS can't be booted from it, so I cancelled that step. Now I have 50000 MB of unallocated space, but how can I install Ubuntu on it without harming the existing Windows 7? Because I still have only two options:

        • Erase and install Ubuntu
        • Something else

    (I can see my unallocated space by going to the "Something else" option.)
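    The four-primary-partition limit is the crux here: an MBR disk allows at most four primary partitions, so the unallocated space can only be used by turning one slot into an extended partition that then holds logical partitions for Ubuntu. A hedged way to confirm the layout first, assuming the disk is /dev/sda:

        sudo parted /dev/sda print   # shows the partition table type and the four occupied primary slots

    One of the existing primaries (typically the recovery or tools partition, backed up first) would have to be removed or recreated as logical before the installer can create root and swap partitions in the free space; the "Something else" option can then place Ubuntu inside the new extended partition.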

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what into the cpp file. AFAIK, most people, especially those from a C background, do:

        foo.h:

            class foo {
            private:
                int mem;
                int bar();
            public:
                foo();
                foo(const foo&);
                foo& operator=(foo);
                ~foo();
            };

        foo.cpp:

            #include "foo.h"

            int foo::bar() { return mem; }
            foo::foo() { mem = 42; }
            foo::foo(const foo& f) { mem = f.mem; }
            foo& foo::operator=(foo f) { mem = f.mem; return *this; }
            foo::~foo() {}

            int main(int argc, char *argv[]) { foo f; }

    However, my lecturers usually teach C++ to beginners like this:

        foo.h:

            class foo {
            private:
                int mem;
                int bar() { return mem; }
            public:
                foo() { mem = 42; }
                foo(const foo& f) { mem = f.mem; }
                foo& operator=(foo f) { mem = f.mem; return *this; }
                ~foo() {}
            };

        foo.cpp:

            #include "foo.h"

            int main(int argc, char* argv[]) { foo f; }
            // other global helper functions, DLL exports, and whatnot

    Originally coming from Java, I have always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change; I like the different indentation of things in classes when I look at their implementation; and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is, of course, the occasional need for forward declarations.

    Read the article

  • What is the philosophy/reasoning behind C#'s Pascal-casing method names?

    - by Nocturne
    I'm just starting to learn C#. Coming from a background in Java, C++, and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind this? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as instance variables, this is not the case with C#. I would guess one of the goals with properties (as with most languages that support them) is to make properties truly indistinguishable from variables and methods. So one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are therefore also expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far; I'm still learning.) I'm very curious to know how this curious guideline came into being, given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter. (EDIT: By Pascal-casing, I mean PascalCase, which is basically camelCase but starting with a capital letter. Method names typically start with a lowercase letter in most languages.)

    Read the article

  • Should You Delete Windows 7 Service Pack Backup Files to Save Space?

    - by The Geek
    After you install the Windows 7 Service Pack 1 that we mentioned yesterday, you might be wondering how to reclaim some of the lost drive space, which we'll show you how to do today. But should you actually do it? Note: If you haven't installed the new SP1 release yet, be sure to read our post explaining what it entails before you do. Spoiler: it's mostly bugfixes.
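    The mechanism, for reference: SP1 keeps backups of every file it replaces so the service pack can be uninstalled later. On Windows 7, the documented way to reclaim that space, which permanently removes the ability to uninstall SP1, is a single command from an elevated command prompt:

        dism /online /cleanup-image /spsuperseded

    The same cleanup is also offered through the Disk Cleanup tool as "Service Pack Backup Files", which is the safer route for non-technical users.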

    Read the article

  • Can I install a new version of Ubuntu in a spare RAIDed partition with UNetbootin?

    - by artfulrobot
    I have Ubuntu 11.04 running on my home desktop, which has two hard drives mirrored by RAID. The drives are partitioned with a big data partition, a swap partition, and a couple of 20 GB partitions for OSes: one is the 11.04 currently in use, and the other is kept spare for installing a later version, which is what I'd like to do now. The idea of a second OS partition is that I can try the new release and, if it's problematic, boot back into the original one; the machine is shared with others, so I need it to stay available! I have had horrible problems with software RAID after using a live USB stick: basically, it messes up the internal numbering of the RAID drives or something; anyway, the result is you can't boot afterwards :-( and have to spend ages re-assembling the arrays, trying to remember grub commands, etc. Quite a shock when you consider that booting from a live USB is supposed not to affect the existing system. As I'm installing onto a RAIDed disc, I would typically use the alternate installer (sad to hear this is going to be dropped in future). However, I wonder whether I can use UNetbootin, with the normal ISO, to run the installer on top of the existing system, which already understands RAID. If UNetbootin loads from drives that are already assembled as RAID, then presumably it will only see md0... instead of sda, sdb..., and as long as I don't need to repartition (I don't) it should be fine, right? Or is that just plain foolishness? Please tell me before I end up with a dead system (again!)
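    If a live session does leave the arrays half-assembled again, the recovery is usually shorter than it feels in the moment. A hedged reminder sketch, assuming standard mdadm software RAID:

        cat /proc/mdstat               # see which md devices the kernel currently knows about
        sudo mdadm --assemble --scan   # reassemble arrays from their on-disk superblocks
        sudo update-grub               # refresh boot entries once the arrays are back

    This does not settle whether UNetbootin avoids the problem in the first place, but it keeps the worst case from being a rebuild from scratch.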

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    A quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side: the code that fetches the URL and parses the DOM is a fairly large set of JS scripts in our Rails app. Occasionally we want to do this processing on the server side of the app instead, for example when a user supplies a URL but has JS disabled, or uses a non-standards-compliant browser, etc. Ideally I'd like to be able to process these URLs in Ruby on the backend (perhaps in asynchronous background jobs) using the same logic that our JS parsers use, WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend, like ExecJS, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business-logic duplication for apps that need to do both client-side and server-side processing of similar data?

    Read the article

  • System broken after installing Gtk+-3.4.1 with broadway backend enabled

    - by Roman D. Boiko
    I am running Ubuntu 11.10 in VirtualBox. I installed GTK+ 3.4.1 (the latest stable release) from sources, with the X11 and broadway backends enabled. To do that, I also installed the latest versions of glib, libffi, libtiff, libjpeg, gdk-pixbuf, and pango. Each was configured with default options, i.e. installed to /usr/local (at least, I see the respective folders in /usr/local/include). After reboot and login (regardless of which user), the desktop is grey for about 30 seconds with nothing displayed; then Nautilus starts, but nothing else (my locale is Ukrainian, but there is nothing important in the text of the screen). During boot I can access a command prompt as root, use dpkg, etc., but I don't know what to do next. One idea is to reinstall GTK+ and the other libraries with prefix /usr or /usr/share; I will try that, but it is quite time-consuming, so any ideas would be welcome. Reverting to an earlier snapshot is still possible, but it is six days old and I would like to try to solve the problem first.
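    A less destructive first experiment, assuming the freshly built libraries in /usr/local/lib are shadowing the distribution's GTK stack for every session: take them out of the loader's path and see whether the desktop comes back.

        sudo mv /usr/local/lib /usr/local/lib.disabled   # park the self-built libraries
        sudo ldconfig                                    # rebuild the dynamic loader cache
        # reboot; if the session is healthy again, the /usr/local prefix was the culprit

    Rebuilding with an isolated prefix such as --prefix=/opt/gtk-3.4 (and pointing only test programs at it via LD_LIBRARY_PATH) avoids the clash entirely; installing straight into /usr would overwrite distribution files and is riskier than either option.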

    Read the article

  • loading splash screen takes priority over terminal or window manager while running elsa

    - by schonjones
    I recently installed E17 and was trying to set up defaults to use Elsa and ecomorph in place of the standard Compiz, which constantly crashes since updating to 12.04. If Elsa is installed, the loading screen hangs and never reaches the login; I can get to a terminal or the E17 login instead of the standard GDM that usually shows up, but within a second the screen goes back to the loading screen. I can still type and log in, as well as run commands in the terminal, but all I see is the loading screen; switching between terminals, I can confirm my commands before it switches back to the splash. If I remove Elsa, the loading screen still hangs, but I can get to a terminal login and run lightdm to start my session with no problems. I have multiple DEs installed and am unsure which loading screen is coming up; I think it's the KDE one, and GRUB comes up with a Debian background, if that helps. I'm not sure whether I can switch the loading screen to resolve this issue, or whether I'll just have to scrap Elsa and get lightdm to load on boot again. Elsa would be my preference. I don't have the space to back up my files for a complete reinstall. Please help!
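    For the fallback path (getting lightdm back as the boot-time display manager), the usual hedged steps are:

        cat /etc/X11/default-display-manager   # shows which DM the system thinks it should start
        sudo dpkg-reconfigure lightdm          # re-pick the default display manager interactively

    This does not make Elsa work, but it restores a bootable graphical login while the Elsa/splash interaction is being debugged.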

    Read the article

  • Unity Dash won't find local files or rearrange icons on two computers

    - by Stanton.Sculpture
    Suddenly I can't move icons around my Unity launcher, and the Dash won't search for my local files and folders. This was working when I first installed 13.10, but now the Dash won't find local files, and the launcher won't let me rearrange icons in any way. I've tried turning all the scopes (lenses?) on and off in multiple combinations, but it won't find any files unless I use Nautilus; it's mostly unresponsive, and I can't see my recently used files or the files-and-folders scope at all. Dragging and dropping icons on the side dock doesn't work; they only stick to my mouse until I put them back where they were. I cannot unlock any icons from the launcher; clicking just does nothing. I tried rebooting both of my computers and it still won't function normally. I used ubuntu-bug -w to report a bug; no one has gotten back to me. Is there some option I changed that could cause this? It happens on both my laptop and my desktop. Please help, Alex
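    Two hedged things worth trying, since the local-file results in the Dash come from the files lens, which is backed by the Zeitgeist activity log:

        sudo apt-get install --reinstall unity-lens-files   # put the files lens back in a known state
        rm -rf ~/.local/share/zeitgeist                     # reset the activity log the lens queries
        zeitgeist-daemon --replace &                        # restart the logger

    Log out and back in afterwards. A corrupted Zeitgeist database is a common cause of empty file results, though it would not explain the locked launcher icons, which may be a separate issue.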

    Read the article

  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want to be used on all opaque meshes of the scene whenever a specific camera renders. The shader is working properly when it is manually applied to some material. The shader is as follows:

        Shader "Custom/Outline" {
            Properties {
                _Color ("Main Color", Color) = (.5,.5,.5,1)
                _OutlineColor ("Outline Color", Color) = (1,0.5,0,1)
                _Outline ("Outline width", Range (0.0, 0.1)) = .05
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            }
            SubShader {
                Tags { "Queue"="Overlay" "RenderType"="Opaque" }
                Pass {
                    Name "OUTLINE"
                    Tags { "LightMode" = "Always" }
                    Cull Off
                    ZWrite Off
                    // Uncomment to show outline always.
                    //ZTest Always
                    CGPROGRAM
                    #pragma target 3.0
                    #pragma vertex vert
                    #pragma fragment frag
                    #include "UnityCG.cginc"

                    struct appdata {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f {
                        float4 pos : POSITION;
                        float4 color : COLOR;
                    };

                    float _Outline;
                    float4 _OutlineColor;

                    v2f vert(appdata v) {
                        // just make a copy of incoming vertex data but scaled according to normal direction
                        v2f o;
                        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                        float3 norm = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
                        float2 offset = TransformViewToProjection(norm.xy);
                        o.pos.xy += offset * o.pos.z * _Outline;
                        o.color = _OutlineColor;
                        return o;
                    }

                    float4 frag(v2f fromVert) : COLOR {
                        return fromVert.color;
                    }
                    ENDCG
                }
                UsePass "Specular/FORWARD"
            }
            FallBack "Specular"
        }

    The camera used for the effect has just a script component which sets up the shader replacement:

        using UnityEngine;
        using System.Collections;

        public class DetectiveEffect : MonoBehaviour {

            public Shader EffectShader;

            // Use this for initialization
            void Start () {
                this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque");
            }

            // Update is called once per frame
            void Update () {
            }
        }

    Unfortunately, whenever I use this camera I just see the background color. Any ideas?

    Read the article

  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again, as the solution my son eventually found may assist other users of the forum, or someone may be able to tweak it to improve the performance. Having installed Kubuntu 12.04.1 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362, which has 256 MB of RAM (actually 512 MB "on the box", but a good deal can be borrowed by the graphics!). I found Kubuntu wouldn't run in so little memory, but downloaded lubuntu-12.04-alternate-i386.iso, which I understood was light enough. The laptop has one internal 40 GB Toshiba hard drive divided into three partitions: C, 19 GB, with Windows XP, the Windows program files, and some data; D, 19 GB, mostly data; and a small 2 GB partition with some Acer software, which XP can't normally "see". I transferred most of the contents of D to a memory stick, leaving 16 GB free for Lubuntu; I did not want to dump XP yet, though it is painfully slow. I installed Lubuntu from the USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500 MB boot partition, 10 GB for Linux, 2 GB of swap, and 6 GB of data shareable between Linux and Windows. I had no error messages during installation; I rebooted, was offered the choice of Ubuntu or XP, and selected the former. After a few minutes I get a dark blue screen announcing Lubuntu, with five dots underneath that lighten in turn. Eventually the lights stop, and whatever I try, the screen remains blank apart from "Lubuntu". I tried several solutions suggested on the forum for "identical" questions, but without success.

    Read the article
