Search Results

Search found 17976 results on 720 pages for 'old versions'.


  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems I can just create a Facebook App that points at any URL I want and use that as the app/canvas; it accomplishes this via iframes. So my tentative plan is to point it at my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things a Facebook App can and can't do. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut of the transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps at just any old URL without some sort of scan; otherwise that would open Facebook up to every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms & conditions for what a Facebook App can and can't do? Surprisingly, I can't find this anywhere! Thanks in advance!

    Read the article

  • How can I connect my USB (HP) printer in 10.4? It isn't discovered, but it worked in 9.x

    - by Brian
    My printer was working under 9.x. It is an HP Photosmart C3100 series. When I open Administration > Printing, no local printers are found. I try to add it via "Other" (my local choices are Serial and Other). I have tried many URIs (ipp://localhost:631/ipp, http://localhost/ipp, localhost, 127.0.0.1, etc.); none have worked. Under networked printers I have tried JetDirect, using localhost and 127.0.0.1 on port 631. I have tried many options under IPP with different variants of the host, trying to verify a printer. No luck. I tried LPD/LPR with localhost and tried the probe; no luck. I tried the CUPS admin via localhost:631 and that didn't work. On the old version it simply found the local printer; I might have picked the driver, I can't remember, but it was the Photosmart C3100 series and it was working. I just can't get 10.4 to print.
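
    For reference, a minimal command-line sketch of the same diagnosis through CUPS itself, assuming the printer is connected over USB (the queue name and the exact URI/driver strings below are placeholders; the first two commands print the real ones):

        lpinfo -v                      # list device URIs CUPS can see; a live USB printer
                                       # shows up as something like usb://HP/Photosmart%20C3100...
        lpinfo -m | grep -i c3100      # list candidate drivers for this model
        sudo lpadmin -p c3100 -E -v 'usb://HP/Photosmart%20C3100%20series' -m <driver-from-above>
        lp -d c3100 /etc/hosts         # send a test job

    If lpinfo -v shows no usb:// entry at all, the problem is below CUPS (cabling, permissions, or the USB printer kernel module) rather than in the admin GUI. For HP printers, the hplip package's hp-setup tool is also worth a try.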

    Read the article

  • Embedded Nashorn in JEditorPane

    - by Geertjan
    Here's a prototype for some kind of backoffice content management system. Several interesting goodies are included, such as an embedded JavaScript editor, as can be seen in the screenshot. Key items of interest in the above are as follows:

    Embedded JavaScript editor (i.e., the latest and greatest Nashorn technology; look it up if you're not aware of what that is). The way that's done is to include the relevant JavaScript modules in your NetBeans Platform application. Make very sure to include "Lexer to NetBeans Bridge", which does a bunch of critical stuff under the hood. The JEditorPane is defined as follows, along the lines that I blogged about recently thanks to Steven Yi:

        javaScriptPane.setContentType("text/javascript");
        EditorKit kit = CloneableEditorSupport.getEditorKit("text/javascript");
        javaScriptPane.setEditorKit(kit);
        javaScriptPane.getDocument().putProperty("mimeType", "text/javascript");

    Note that "javaScriptPane" above is simply a JEditorPane.

    Timon Veenstra's excellent solution for integrating Nodes with MultiViewElements, which is described here by Timon, and nowhere else in the world. The tab you see above is within a pluggable container, so anyone else could create a new module and register their own MultiViewElement such that it will be incorporated into the editor.

    A small trick to ensure that only one window opens per news item:

        @NbBundle.Messages("OpenNews=Open")
        private class OpenNewsAction extends AbstractAction {

            public OpenNewsAction() {
                super(Bundle.OpenNews());
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                News news = getLookup().lookup(News.class);
                // Reactivate the window if this news item is already open:
                Mode editorMode = WindowManager.getDefault().findMode("editor");
                for (TopComponent tc : WindowManager.getDefault().getOpenedTopComponents(editorMode)) {
                    if (tc.getDisplayName().equals(news.getTitle())) {
                        tc.requestActive();
                        return;
                    }
                }
                // Otherwise, create a new multiview for the node:
                TopComponent tc = MultiViews.createMultiView("application/x-newsnode", NewsNode.this);
                tc.open();
                tc.requestActive();
            }
        }

    The rest of what you see above is all standard NetBeans Platform stuff. The sources of everything you see above are here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/CMSBackOffice

    Read the article

  • Building my own kernel on Ubuntu

    - by chris
    Hi, I'm trying to build my own kernel, as I want to write a kernel program which I need to compile into the kernel. So what did I do? Download the source from kernel.org, extract it, do the make menuconfig and configure everything as needed, do a make, do a make modules_install, do a make install and finally run update-grub. Result: it doesn't boot at all... Now I had a look here and it describes a different way of compiling a kernel. Could this be the reason why my way did not work? Or does anyone else have an idea why my kernel doesn't work?

    Edit: Great answer, thanks, Oli. But I tried it the old-fashioned way, and after one hour of compiling I got this message:

        install -p -o root -g root -m 644 ./debian/templates.master /usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/DEBIAN/templates
        dpkg-gencontrol -DArchitecture=i386 -isp \
            -plinux-image-2.6.37.3meinsmeins -P/usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/
        dpkg-gencontrol: error: package linux-image-2.6.37.3meinsmeins not in control info
        make[2]: *** [debian/stamp/binary/linux-image-2.6.37.3meinsmeins] Error 255
        make[2]: Leaving directory `/usr/src/linux-2.6.37.3'
        make[1]: *** [debian/stamp/binary/pre-linux-image-2.6.37.3meinsmeins] Error 2
        make[1]: Leaving directory `/usr/src/linux-2.6.37.3'
        make: *** [kernel-image] Error 2
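
    For reference, a sketch of the vanilla upstream build the question describes, with the one step that is easy to miss on Ubuntu (the version number is the poster's own; without an initramfs, a kernel whose disk drivers are built as modules will typically fail to boot):

        cd /usr/src/linux-2.6.37.3
        make menuconfig                          # configure as needed
        make                                     # build kernel and modules
        sudo make modules_install                # install modules under /lib/modules/2.6.37.3
        sudo make install                        # copy vmlinuz, System.map and config into /boot
        sudo update-initramfs -c -k 2.6.37.3     # easy to forget: build an initramfs for the new kernel
        sudo update-grub                         # regenerate the boot menu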

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

        The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    This makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the position element. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
                                           c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). There is not much else of relevance to this problem anywhere else. Note that the large commented sections are from an old version of the drawing code.

    Read the article

  • Can't get a good install: 11.10 server

    - by jack
    I apparently screwed up my partitioning trying to get LVM and RAID1 going. The machine is an Intel dual-core desktop with 2 GB of RAM and two SATA drives, one 250 GB and the other 500 GB. This is a build for my school in N.E. Thailand. We have 20+ clients now, a website and email; our old server is dying fast and we are going to add another 12 stations next week. I really need some help here! 1. The board has onboard gigabit Ethernet that apparently uses the same driver as the Realtek 811c; I also installed a PCIe gigabit card, likewise an 811c. At several points eth0 has accessed the internet fine, but eth1 will not communicate. 2. I saw a "fix" for this online, to run as root: rmmod r8169. This immediately killed the working onboard card. 3. I tried to re-install 11.10, figuring that would re-install r8169. However, I messed something up in my partitioning and can't get a clean boot now. 4. So, after 12 or so re-installs and 2 days, I think I can get through it right if I can start over with clean drives, but I can't figure out how to empty them out, what with the soft-RAID and LVM partitions (one way is sketched below). It seems like I get it going well and then, trying to fix that one little problem, I go backwards. Please help! Please send email. Thanks.
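
    A minimal sketch for wiping leftover RAID/LVM metadata from a live USB session so the installer sees blank drives (the device names below are examples; confirm with lsblk first, and note that these commands destroy data):

        sudo lsblk                                          # identify disks and old md/LVM members
        sudo vgchange -an                                   # deactivate any LVM volume groups
        sudo mdadm --stop /dev/md0                          # stop each old array (/dev/md0, /dev/md1, ...)
        sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1    # clear RAID metadata from member partitions
        sudo wipefs -a /dev/sda /dev/sdb                    # erase remaining RAID/LVM/filesystem signatures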

    Read the article

  • Multiple URLs going to the same page - kosher for Google?

    - by Ashoka15
    I hear conflicting answers from people about this; I'm a developer by trade, and my SEO knowledge is not what it should be. Here's my situation: I run a website that lists hotels, restaurants, bars, shops, etc. for a small Asian beach town. Lots of establishments here are hotels with a restaurant and bar, as well as restaurants that are also bars; as an example, a Mexican restaurant that also functions as a full cocktail bar. I first set it up so each establishment has one page but can create multiple pages based on its other areas of business. This forces people to create TWO listings under the same name, and most just add the exact same information onto each page, making things redundant. I am re-arranging the database so that an establishment has only ONE listing (one unique page referenced by the unique code '12345ABCDEF') that is accessible from browsing under "Restaurants" and "Bars", and has the URL structures:

        site.com/dining/mexican/12345ABCDEF/business-name.html
        site.com/bars/cocktail_bars/12345ABCDEF/business-name.html

    I could easily simplify the URL to just the unique code and name:

        site.com/12345ABCDEF/business-name.html

    But I found that Google has parsed my URL structure and lists entries like this on their SERP:

        Home > Dining > Mexican

    with each crumb pointing to the default page for the homepage, restaurants, and Mexican restaurants. If I simplify the URL structure, will I lose these associations? Could Google also be picking up this structure from my breadcrumb trail at the top of the page? What is the best way to set up URLs on these pages so I am not penalized by Google for having identical information on two URLs, while still being able to have places show up as they did with the old system?

    Read the article

  • Using HTML5 Today part 4 – What happened to XHTML?

    - by Steve Albers
    This is the fourth entry in a series of descriptions & demos from the “Using HTML5 Today” user group presentation. For practical purposes, the original XHTML standard is a historical footnote, although XHTML Transitional will probably live on forever in the default web page templates of old web page editors. The original XHTML spec was released in 2000, on the heels of the HTML 4.01 spec. The plan was to move web development away from HTML to the more formal, rigorous approach that XHTML offered, but it was built on a principle that conflicts with the history and culture of the Internet: XHTML introduced the idea of Draconian Error Handling, which essentially means that invalid XML markup on a page will cause the page to stop rendering. There is a transitional mode offered in the original XHTML spec, but the goal was to move to D.E.H. You can see the result by changing the MIME type for a document to “application/xhtml+xml” - for my class example we change this setting in the web.config file:

        <staticContent>
          <remove fileExtension=".html" />
          <mimeMap fileExtension=".html" mimeType="application/xhtml+xml" />
        </staticContent>

    With the new strict syntax a simple error, in this case a duplicate </td> tag, can cause a critical page error. While XHTML became very popular in the ensuing decade, the Strict form of XHTML never achieved widespread use. Draconian Error Handling was one of the factors that led in time to the creation of the WHATWG, or Web Hypertext Application Technology Working Group. The WHATWG contributed to the eventual disbanding of the XHTML 2.0 working group and the W3C’s move to embrace the HTML5 standard. For developers who long for XML markup, the W3C HTML5 standard includes an XHTML5 syntax. For a longer, more definitive look at what happened to XHTML and how HTML5 came to be, check out the Dive Into HTML5 mirror site or Bruce Lawson’s “HTML5: Who, What, When, Why” talk.

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts: there will be 2-3 developers; at least one of the developers uses Windows, the rest use Linux; there is one remote Linux-based machine which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); the client is internal, with frequent business logic changes, scrum, and daily deployments. What I want to achieve is a good workflow on as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is: developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine; while coding, the developer deploys to the Vagrant machine; after a local merge to the test branch, Jenkins on the Vagrant box handles tests; when everything is fine, the developer pushes commits and merges; Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance. Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment is just wise; isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine? Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • Telerik Extensions for ASP.NET MVC Q1 2010 is out!

    Today we shipped the Q1 2010 release out of the door. Go download the open source version, or if you are a licensed customer download it from your client.net account. What is new on the MVC front: no longer in BETA; new components - TreeView, NumericTextBox, Calendar, DatePicker; new features - Grid grouping, Grid editing, Grid localization; using jQuery 1.4.2; lots of bug fixes. The rest is mentioned in the release notes. Breaking changes from Q1 2010 Futures! There is one breaking change since the Q1 2010 Futures release. The Toolbar method of the GridBuilder has been renamed to ToolBar:

        <%= Html.Telerik().Grid(Model)
                //.Toolbar(commands => commands.Insert())   <- Old
                .ToolBar(commands => commands.Insert())     // <- New
        %>

    For a complete list of changed grid APIs check the changes and backward compatibility help topic. By the way, if you still haven't cast your vote for a new product or feature, do it now! We will soon start development for Q2 2010.

    Read the article

  • How to properly install Chromium from a zip and make it the default browser

    - by ClarifyLinux
    Since the Chromium PPA is no longer maintained, those of us preferring Chromium over Chrome have two options: build and install from source, or download either 'beta' or daily builds (in a zip file). Unfortunately for me, option 1 is overly complicated. I know how to compile most other applications on Ubuntu, but I've never been able to get Chromium to build correctly. I am currently using option 2. In Chromium I have the Chromium Updater installed (http://goo.gl/ffAMy). This gives me quick access to the most recent 64-bit versions. Once downloaded, I install to /home/myuser/opt/chrome-linux. From this directory I can run the chrome binary. It works perfectly, except for the fact that I cannot get it to act as my default browser. I've tried, as root, installing the binary in /opt/chrome-linux/ with a symbolic link to the 'chrome' binary in /usr/bin. Unfortunately, this doesn't work as a non-root user. So my question is: how do I properly install a downloaded Chromium zip build so that it's listed as an option for the default browser?
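
    A sketch of one approach, wiring the unpacked build into the alternatives system and the desktop defaults (the binary path is the poster's install location; the .desktop file name is made up for illustration):

        # offer the unpacked binary as a system-wide browser alternative
        sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser \
            /home/myuser/opt/chrome-linux/chrome 200
        sudo update-alternatives --config x-www-browser

        # desktop sessions pick the default from .desktop entries, so create
        # ~/.local/share/applications/chromium-zip.desktop pointing at the same
        # binary, then select it:
        xdg-settings set default-web-browser chromium-zip.desktop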

    Read the article

  • Today's Links (6/27/2011)

    - by Bob Rhubart
    2011 Entrepreneurs of the Year, Northern California Region - Drake Martinet reports on the new batch of entrepreneurs joining the ranks of Oracle CEO Larry Ellison, Yahoo CEO Carol Bartz and eBay co-founder Pierre Omidyar as the Northern California Region winners of Ernst & Young's Entrepreneurs of the Year awards.

    Technical Article: Caching Strategies for Oracle Service Bus 11g - William Markito Oliveira illustrates how the right caching strategy can make a big difference in application performance.

    Kscope 11 - Day 1 and 2 - Oracle ACE Director Markus Eisele checks in from Long Beach.

    Kaleidoscope 2011: Sunday’s Symposium - And so does Oracle ACE Director Marco Gralike.

    Yet another GlassFish 3.1.1 promoted build | The Aquarium - "This version was carefully designed to be highly compatible with the previous 3.x versions," says Alexis, "thus leaving you with little reasons not to upgrade as soon as it comes out this summer."

    Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now! - "The NoSQL databases are not intended to be a replacement for the mainstream RDBMS," says Arun Gupta.

    I have a performance problem | Alan Hargreaves - Good (and entertaining) advice from an Australian Solaris and Network Domain TSC* Principal Field Technologist.

    Read the article

  • Ubuntu 12.04 desktop, VNC viewer not refreshing screen

    - by user73279
    I've had this issue across multiple machines and multiple versions of Ubuntu desktop (all 10.04 or later). Usually it happens with an old laptop I've put Ubuntu on, but now it's happening on my primary dev machine (a quad-core PC recently upgraded to Ubuntu 12.04 desktop). The problem is this: I can connect to the machine and log in with the password, and the initial screen looks fine but never refreshes. I can see the monitor for the machine across the room and can see the mouse move and the menus pop up, but the image of the screen on the PC in front of me running the VNC viewer never updates. So the mouse and keyboard commands are working. Setup: Ubuntu 12.04 Desktop; UltraVNC viewer (also seen with RealVNC's free VNC viewer); Desktop Sharing; static IP on eth0, dynamic IP on eth1. I think it is an Ubuntu config issue, because this PC used to work just fine with 9.04, 10.04, and 11.10 (over the past couple of years). I've also had a couple of laptops that used to have this issue with older Ubuntus but don't with 12.04. Additional info: the Win7 PC I'm trying to use to control the Ubuntu PC is connected via 2 D-Link 8-port gigabit routers. The Ubuntu laptop I usually control via VNC is typically only connected to the network via wireless; its screen refresh is choppy but usable. I've repeated the issue on a Win7 laptop which was connected via ethernet and via wireless.

    Read the article

  • X-notifier doesn't work in Chromium Browser

    - by cipricus
    It just keeps checking in vain. It also cannot import or export data; I just get an error. I use the latest versions of both in Lubuntu 12.04. In Google Chrome it works. What could the problem be? Edit - following vasa1's comment - running sudo aa-status I get:

        apparmor module is loaded.
        16 profiles are loaded.
        16 profiles are in enforce mode.
           /sbin/dhclient
           /usr/bin/evince
           /usr/bin/evince-previewer
           /usr/bin/evince-previewer//launchpad_integration
           /usr/bin/evince-previewer//sanitized_helper
           /usr/bin/evince-thumbnailer
           /usr/bin/evince-thumbnailer//sanitized_helper
           /usr/bin/evince//launchpad_integration
           /usr/bin/evince//sanitized_helper
           /usr/lib/NetworkManager/nm-dhcp-client.action
           /usr/lib/connman/scripts/dhclient-script
           /usr/lib/cups/backend/cups-pdf
           /usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper
           /usr/sbin/cupsd
           /usr/sbin/ntpd
           /usr/sbin/tcpdump
        0 profiles are in complain mode.
        3 processes have profiles defined.
        3 processes are in enforce mode.
           /sbin/dhclient (1562)
           /usr/sbin/cupsd (916)
           /usr/sbin/ntpd (1695)
        0 processes are in complain mode.
        0 processes are unconfined but have a profile defined.

    Read the article

  • Glue Records creation

    - by FFrewin
    I need some information on the following issue, as I would like to have it clear in my mind. I have a VPS server. All the sites hosted on this VPS use nameservers on a .gr domain, like ns1.greekdomain.gr and ns2.greekdomain.gr. The .gr domain name is a domain I own with a Greek registrar. Now, I want to move 2 websites with .co.uk domain names to my VPS. The .co.uk domain names are registered with a UK-based registrar. When I went into the domain management panel and changed the nameservers of my domains to my ns1/ns2.greekdomain.gr nameservers, the panel returned an error about invalid nameservers. After digging, I found that my nameservers are not valid because they do not exist as records in the .co.uk registry. And here my big trouble starts. The .co.uk registrar tells me that I have to ask my hosting provider / .gr registrar to create a new record in the .uk registry for my nameservers. The .gr registrar tells me that my UK registrar needs to create a new record for my nameservers. At Nominet (the .co.uk registry), one employee tells me that I need to ask my UK registrar; another employee (who seemed not to understand what I was asking) told me that they cannot change my nameservers for me, and to contact anyone else (old hosting provider, UK registrar, .gr registrar) to help me with that. I can't find help from anybody. I have been trying since last week to transfer my websites to my VPS and I can't. So, the question is: who is responsible, and who is able to create glue records for my nameservers?
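
    For what it's worth, host (glue) records for a name under .gr are created through the registrar of the .gr domain, since they live in the parent .gr zone. A quick sketch of how to check the current state from any machine with dig (the domain is the poster's own placeholder):

        dig +short ns1.greekdomain.gr A       # no answer here means no registry will accept the host
        dig +short ns2.greekdomain.gr A
        dig +trace greekdomain.gr NS          # walks down from the root; shows whether the .gr
                                              # servers return glue for ns1/ns2 in the delegation

    Once both hosts resolve publicly, the .co.uk panel should accept them as nameservers.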

    Read the article

  • Can't install the wireless driver on an HP Pavilion dv2419us

    - by maqtanim
    I've just installed Ubuntu 13.04 on an old HP Pavilion dv2419us. The problem is that Ubuntu doesn't detect the wireless card, but it works fine in Windows 7. The following command returns nothing:

        lspci -vvnn | grep 14e4

    And the lspci output is:

        00:00.1 RAM memory: NVIDIA Corporation C51 Memory Controller 0 (rev a2)
        00:00.2 RAM memory: NVIDIA Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: NVIDIA Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: NVIDIA Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: NVIDIA Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: NVIDIA Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: NVIDIA Corporation C51 Memory Controller 2 (rev a2)
        00:02.0 PCI bridge: NVIDIA Corporation C51 PCI Express Bridge (rev a1)
        00:03.0 PCI bridge: NVIDIA Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: NVIDIA Corporation C51 [GeForce Go 6150] (rev a2)
        00:09.0 RAM memory: NVIDIA Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: NVIDIA Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: NVIDIA Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: NVIDIA Corporation MCP51 PMU (rev a3)
        00:0b.0 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: NVIDIA Corporation MCP51 IDE (rev f1)
        00:0e.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev f1)
        00:10.0 PCI bridge: NVIDIA Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: NVIDIA Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: NVIDIA Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        05:09.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
        05:09.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
        05:09.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
        05:09.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)

    The command lspci -nn | grep 0280 gives no output either. Any suggestions regarding this?
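
    Since the card is absent from lspci entirely (14e4 is Broadcom's PCI vendor ID, and 0280 is the network-controller device class), the usual suspects are a BIOS setting or the hardware radio switch rather than a missing driver. A short sketch of checks that help narrow it down:

        sudo lshw -C network                 # every network device the kernel knows about
        rfkill list                          # soft/hard blocks (only lists devices that registered)
        dmesg | grep -i -e wlan -e firmware  # firmware load failures often explain a missing interface
        sudo update-pciids && lspci -nn      # refresh the PCI ID database and rescan the listing

    If nothing shows up, toggling the wireless switch and checking the BIOS for a WLAN enable option is worth doing before hunting for drivers.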

    Read the article

  • Cannot boot, how to recover

    - by Kendor
    Am running 11.10 64-bit with GNOME Shell. Something happened late Friday whereby my machine never gets to the login screen. I do get to an Ubuntu splash logo; after that I get a text screen that it hangs on. The screen refers to issues with mounting various network resources, including VMware and also some references to my NAS that are in fstab. If I hit "Esc" I can get to the GRUB menu and into the recovery console. If I try to do a file system check, I run into a similar error screen to the one I see when trying to boot normally. A possible clue here is that during my last good session I made some mods to the /etc/hosts file to reference another system which I'm connecting to with Synergy. I don't believe I have hardware issues, as I'm able to boot properly with a live USB and connect to my network/internet. A few more tidbits: I have regular Deja Dup backups on my NAS. I have a good Clonezilla whole-drive image which is 4-6 weeks old. My home is encrypted. I thought I'd try blowing away my hosts file via live USB, but when I mounted the hard drive everything was read-only and I couldn't figure out how to replace it. P.S. I logged in via the CLI and modded the hosts file to remove the entry I'd made, to no avail. The system continues to get stuck on the following:

        CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel version 3.1s

    Would love some sober advice on how to attack this.
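
    Given that boot blocks on the fstab network mounts, a minimal recovery-mode sketch for taking them out of the boot path (the CIFS line is illustrative; the real entries are whatever the poster's fstab contains):

        # from the GRUB recovery console (root shell), make the root fs writable
        mount -o remount,rw /

        # then comment out the NAS lines in /etc/fstab, or add nobootwait so
        # mountall stops blocking the boot on them, e.g.:
        #   //nas/share  /mnt/nas  cifs  credentials=/root/.smbcred,nobootwait  0  0
        nano /etc/fstab

        reboot

    If the machine then boots, the network mounts (not the hosts file) were the culprit and can be fixed at leisure.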

    Read the article

  • How do I know if I'm getting the most out of my video card?

    - by b.long
    My computer at home is a bit lacking, so I want to make sure I'm getting the most out of it while I can. Generally speaking, here are the specs: 4 GB memory; AMD Athlon(tm) 64 X2 Dual Core Processor 5200+ × 2; 64-bit Ubuntu. The terminal shows me the following:

        me@home:~$ uname -a
        Linux home 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        me@home:~$ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc RV380 [Radeon X600 (PCIE)]
        me@home:~$ sudo lshw -C video
          *-display:0
               description: VGA compatible controller
               product: RV380 [Radeon X600 (PCIE)]
               vendor: ATI Technologies Inc
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
               configuration: driver=radeon latency=0
               resources: irq:44 memory:e0000000-efffffff ioport:ac00(size=256) memory:fdef0000-fdefffff memory:fdec0000-fdedffff
          *-display:1 UNCLAIMED
               description: Display controller
               product: RV380 [Radeon X600]
               vendor: ATI Technologies Inc
               physical id: 0.1
               bus info: pci@0000:01:00.1
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:fdee0000-fdeeffff
        me@home:~$ lspci -nn | grep VGA
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc RV380 [Radeon X600 (PCIE)] [1002:5b62]

    The additional drivers menu in System Settings shows me nothing useful, and my attempt at installing ATI's Catalyst Control Center (the drivers that came with the video card) failed. I believe the latest version of Ubuntu at the time was 9.x. What should I do? Install an old version of Ubuntu 9? Use some alternative driver? UPDATE: I might try my hand at a bit from this answer next: "Installing Catalyst Manually (from AMD/ATI's site)". From a terminal, fgl_glxgears returns "fgl_glxgears: command not found". Any thoughts?
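
    For what it's worth, the lshw output above already shows the open-source radeon driver bound to the card, which is the expected driver for an X600-class GPU (AMD's proprietary Catalyst dropped that generation of cards long ago, which would explain both the failed install and the missing fgl_glxgears). A quick sketch for confirming that 3D acceleration is actually active:

        sudo apt-get install mesa-utils     # provides glxinfo and glxgears
        glxinfo | grep -i -e "direct rendering" -e "renderer string"
        # "direct rendering: Yes" plus a renderer naming the ATI/Radeon chip
        # (rather than a software rasterizer) means the GPU is doing the work
        glxgears                            # crude sanity check, not a benchmark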

    Read the article

  • 12.10: no audio via HDMI, and video speeds up

    - by jackson
    I have a laptop with an ATI Radeon 4200. On 12.04 everything worked fine; since upgrading to 12.10 I cannot get sound over HDMI. When I switch to HDMI audio, the video speeds up to about 2x. I can use the speakers in my laptop and watch video via HDMI with no problems. Things I have tried: various tutorials to install the AMD/ATI drivers, all of which resulted in low-graphics mode; checking that everything is properly set in alsamixer and the sound utility, and (after installing pavucontrol) everything in there; verifying that the output from cat /proc/asound/cards looks normal. When I initially upgraded there was a plethora of problems which I believe were due to the old proprietary driver still being used while no longer compatible; after a few hours trying to fix that, I decided just to back up and do a fresh install, which works great except for the above-stated problem. Any help would be greatly appreciated! Finally, hopefully this hasn't already been answered; I have tried a few different searches on the boards and haven't come up with anything.

        $ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: SB [HDA ATI SB], device 0: ALC269VB Analog [ALC269VB Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
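
    A minimal test that bypasses PulseAudio and drives the HDMI device directly, using the card/device numbers from the aplay -l output above (card 1, device 3):

        speaker-test -D plughw:1,3 -c 2 -t sine       # should produce a tone on the TV
        aplay -D plughw:1,3 /usr/share/sounds/alsa/Front_Center.wav

    If raw ALSA output works, the driver side is fine and the fault (including the 2x video speed-up, which looks like an audio-clock sync problem) sits in the PulseAudio routing; if it doesn't, the problem is below PulseAudio in the HDA driver.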

    Read the article

  • Suggestions for a lone-wolf dev setup

    - by d33j
    I'm looking for some suggestions for a better development setup. Background: I'm a crusty old software engineer (mostly Java of late) and I have around 50-100 incomplete Java projects scattered everywhere - USB keys, HDDs, spanning across 5 or 6 computers - which have been put on hold for a few years (i.e., family). I have no version control at home. I've been using IntelliJ for around 10 years, so that's the only constant. I'm thinking of nominating one machine as a headless server to put all my projects on, maybe an Ubuntu box; that way it won't matter which device I'm on, all my projects can be accessed (and I don't have to waste time actually looking for them). I don't need to access code over the net. These are my own 'happy place' projects, so I only work on them when I'm at home. However, I can see the benefit of the tasking app being online; that way, if I think of something while on public transport, let's say, I can add it then and there, but it's not a requirement. I can wait until I get home to create tasks. Summary: I need some sort of version control so I can roll back mistakes, and some sort of simple tasking software where I can assign tasks to myself for later, when I get time. I use Subversion, Sonar, Jira and Crucible at work, but I think that's a little bit of overkill for me. What do you suggest?
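
    One low-ceremony option for the headless box is a set of bare Git repositories reached over SSH, which IntelliJ supports out of the box; a sketch (the hostname and paths are made up for illustration):

        # on the headless server, once per project
        mkdir -p /srv/git/happy-place.git
        git init --bare /srv/git/happy-place.git

        # on each machine, for every rescued project
        cd ~/projects/happy-place
        git init && git add -A && git commit -m "import from old USB key"
        git remote add origin ssh://me@homeserver/srv/git/happy-place.git
        git push -u origin master

    Any machine on the LAN can then clone, and a dead disk costs one copy rather than the project.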

    Read the article

  • Fatal X server error: Failed to submit to batchbuffer

    - by Jan
    Ubuntu 10.04 Lucid Lynx used to run fine on my computer. For a few weeks now, my X server has been crashing out of the blue while the computer is idle and I'm logged into a GNOME session. (I'm then greeted with a new GDM login prompt.) After the crash, /var/log/gdm/:0.log.1 has the following:

        Fatal server error:
        Failed to submit batchbuffer: Input/output error
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.

    ~/.xsession-errors.old has symptoms of X clients dying:

        nm-applet: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.0.

    dmesg says:

        [191848.390081] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung
        [191848.390086] render error detected, EIR: 0x00000010
        [191848.390088] IPEIR: 0x00000000
        [191848.390090] IPEHR: 0x01800002
        [191848.390091] INSTDONE: 0xffffffff
        [191848.390093] INSTPS: 0x8001e020
        [191848.390095] INSTDONE1: 0xbfffffff
        [191848.390097] ACTHD: 0x0a47b014
        [191848.390099] page table error
        [191848.390100] PGTBL_ER: 0x00000002
        [191848.390103] [drm:i915_handle_error] ERROR EIR stuck: 0x00000010, masking
        [191848.390127] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 5617217 at 5617205)

    Is this a known problem that can be traced back to the X server from the Ubuntu repositories? How would I debug this? Edit: there's a relevant bug on LP.

    Read the article

  • SharePoint Saturday Huntsville Wrap Up

    - by Mark Rackley
    So, Cathy Dew (@catpaint1) and company put on a great SharePoint Saturday event this past weekend. I got to hang out with some old friends and meet some new ones. I’d list you all, but I’d undoubtedly miss someone and don’t want to offend anyone. Although I find it odd that I now see @MossLover more since she moved to New Jersey than when she lived next door in Kansas City… what’s up with that? Anyway, Cathy did a tremendous job organizing the event. Everything went smoothly and everyone had a great time. Maybe I can talk her into organizing the rest of SharePoint Saturday Ozarks on June 12th… you know that’s coming up, right? While you’re here, why not go ahead and register right now at: http://spsozarks.eventbrite.com/ Yes, that was a shameless plug… I did my default presentation on “Wrapping Your Head Around the SharePoint Beast”. This continues to be my most popular presentation. I try to tweak it every time, and I always have fun doing it. I get to pick on people and they pick on me back, but I always manage to learn something new when I present it. I had a great interactive crowd and they didn’t throw anything at me. All in all I consider it a success. Thanks for coming if you attended! You can get the slides here: SharePoint Saturday Huntsville - Wrapping Your Head Around the SharePoint Beast. Next up for me is SharePoint Saturday DC on May 15th. Wow, this is going to be a huge event, with space for 1,500 attendees… no, that is not a typo! Stop me and say hi if you are able to make it!

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibility is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). A pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment. Personally, I am offended at having to perform repeatable, mundane tasks in this day and age when we have the technology to automate it all. So I devised a very simple procedure: abstract the files into templates, stub the env-specific values with parameters, and then write a simple Perl script that, given a template and an environment matrix with env-specific values for each parameter, produces the final file (the idea is sketched below). This is nothing special, cutting-edge or revolutionary - I am pretty sure that 20 years ago efficient shops did their CM like that. However, it requires that changes be made at the template level and then distributed across the different environments using the script, not made directly in the final environment-specific files. This is where I am encountering resentment: they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resentful of change and where most things have to be done at the level of the least competent team member?
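
    The poster's tool is Perl, but the core of the idea fits in a few lines of shell; a hedged sketch with made-up file and parameter names:

        # template: app.conf.tmpl contains stubs such as @@SERVER@@ and @@USER@@
        # matrix:   env/prod.vars holds one KEY=value line per parameter for that environment

        ENV=prod
        cp app.conf.tmpl "app.$ENV.conf"
        while IFS='=' read -r key val; do
            sed -i "s|@@${key}@@|${val}|g" "app.$ENV.conf"   # substitute each stub with the env value
        done < "env/$ENV.vars"

    Everyone edits the template and the per-environment matrix; the generated files become build artifacts that nobody touches by hand.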

    Read the article

  • Commands in Task-It - Part 1

    Download Source Code NOTE: To run the source code provided your will need to update to the RC (release candidate) versions of Silverlight 4 and VisualStudio 2010. In recent blog posts, like my MVVM post, I used Commands to invoke actions, like Saving a record. In this rather simplistic sample I will talk about the basics of Commands, and in my next post will get deeper into it. What is a Command? I remember the first time a UI designer used the word "command" I wasn't really sure what she was referring to. I later realized that it is just a term that is used to represent some UI control that can invoke an action, like a Button, HyperlinkButton, RadMenuItem, RadRadioButton, etc. Why should we use Commands? I'm sure you're familiar with the code behind approach of handling events. For example, if you had a Button and a RadMenuItem that ...Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article
