Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • How to get my IR remote to work? Lirc can't see it

    - by user1234567
    I'm using Ubuntu 11.10 (amd64) and I'm trying to get my infrared remote control working. The IR device is part of a DVB-T USB stick (based on an RTL2832U chip). I'm using these drivers - it's the only way of getting this device to work under 11.10 that I found. It's a big improvement over previous Ubuntu versions, where I had to edit the driver's code. The device works quite well - and the IR part of it works, too. The driver's page says that the code is in alpha stage, but I'm pretty sure that my issue has nothing to do with that. If, and only if, the driver's module is loaded with the parameter rtl2832u_rc_mode=2 (which means "use NEC protocol for IR"), the remote kind of works; I can see this by running cat /dev/.. ../input6 - when I press a button, random letters appear. The remote works just like a keyboard, but the keys are totally messed up - when I press '5' the volume goes down, etc. I would like to use Lirc to fix that, but Lirc can't detect my device (i.e. irw shows nothing). I suspect it's because something gets control of the device and sets it up as a keyboard. Lirc seems to be working - its KDE settings module works too - but it just doesn't detect the device. The Lirc page describes this issue, but since 2009 - the year that page was last updated - Ubuntu has moved from HAL (described there) to DeviceKit, rendering the provided instructions useless. I had a similar issue with my previous remote, but the keys were not messed up so much - the remote was usable, so I gave up trying to get Lirc working. I tried the answer provided here, but it changed nothing. I also tried forcing lircd to use my device, but this didn't work either: for i in /sys/class/input/input* ; do echo -n "$(basename "$i"): "; cat "$i/name"; done shows input0: Power Button input1: Power Button input2: Logitech Logitech USB Keyboard input3: A4Tech PS/2+USB Mouse input6: IR-receiver inside an USB DVB receiver But when I run: lircd -n --device=name='IR*' as root (also tried with the full name) I always see: lircd-0.9.0[3983]: lircd(default) ready, using /var/run/lirc/lircd lircd-0.9.0[3983]: accepted new client on /var/run/lirc/lircd lircd-0.9.0[3983]: could not get file information for name=IR* lircd-0.9.0[3983]: default_init(): No such file or directory lircd-0.9.0[3983]: Failed to initialize hardware So, how do I set up Lirc with the devinput driver in such a case?
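
    A minimal sketch of the devinput approach being asked about, assuming the receiver really is the input6 device listed above (the exact /dev/input/eventX node is an assumption and should be checked first, for example with ls -l /dev/input/by-path/):

        # Stop any running lircd, then point it at the kernel input device using the
        # devinput driver; event6 is assumed to correspond to the input6 entry above.
        sudo service lirc stop
        sudo lircd --driver=devinput --device=/dev/input/event6
        irw    # press remote buttons; decoded key events should appear here if it works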

    Read the article

  • Script For Detecting Availability of XMLHttp in Internet Explorer

    - by Duncan Mills
    Having the XMLHttpRequest API available is key to any ADF Faces Rich Client application. Unfortunately, it is possible for users to switch off this option in Internet Explorer as a Security setting. Without XMLHttpRequest available, your ADF Faces application will simply not work correctly, but rather than giving the user a bad user experience wouldn't it be nicer to tell them that they need to make some changes in order to use the application?  Thanks to Blake Sullivan in the ADF Faces team we now have a little script that can do just this. The script is available from https://samplecode.oracle.com here - The attached file browserCheck.js is what you'll need to add to your project. The best way to use this script is to make changes to whatever template you are using for the entry points to your application. If you're not currently using a template then you'll have to make the same change in each of your JSPX pages. Save the browserCheck.js file into a /js/ directory under your HTML root within your UI project (e.g. ViewController). In the template or page, select the <af:document> object in the Structure window. From the right mouse (context) menu choose Facet and select the metaContainer facet. Switch to the source code view and locate the metaContainer facet. Then insert the following lines (I've included the facet tag for clarity but you'll already have that):      <f:facet name="metaContainer">        <af:resource type="javascript"                      source="/js/browserCheck.js"/>        <af:resource type="javascript">           xmlhttpNativeCheck(                     "help/howToConfigureYourBrowser.html");        </af:resource>      </f:facet> Note that the argument to the xmlhttpNativeCheck function is a page that you want to show to the user if they need to change their browser configuration. So build this page in the appropriate place as well. You can also just call the function without any arguments e.g. xmlhttpNativeCheck(); in which case it will pop up default instructions for the user to follow, but not redirect to any other page.

    Read the article

  • Create USB installer from the command line?

    - by j-g-faustus
    I'm trying to create a bootable USB image to install Ubuntu on a new computer. I have done this before following the "create USB drive" instructions for Ubuntu desktop, but I don't have an Ubuntu desktop available. How can I do the same using only the command line? Things I've tried: Create bootable USB on Mac OS X following the ubuntu.com "create USB drive" instructions for Mac: Doesn't boot. usb-creator: According to apt-cache search usb-creator and Wikipedia usb-creator only exists as a graphical tool. "Create manually" instructions at help.ubuntu.com: None of the files and directories described (e.g. casper, filesystem.manifest, menu.lst) exist in the ISO image, and I don't know what has replaced them. unetbootin scripting: Requires X server (graphics support) to run, even when fully scripted. (The command sudo unetbootin lang=en method=diskimage isofile=~/ubuntu-10.10-server-amd64.iso installtype=USB targetdrive=/dev/sdg1 autoinstall=yes gives an error message unetbootin: cannot connect to X server.) Update Also tried GRUB fiddling: Merging information from pendrivelinux.com a related question on the Linux Stackexchange and a grub configuration example I was able to get halfway there - it booted from USB, displayed the grub menu and started the installation, but installation did not complete. For reference, this is the closest I got: sudo su # mount USB pen mount /dev/sd[X]1 /media/usb # install GRUB grub-install --force --no-floppy --root-directory=/media/usb /dev/sd[X] # copy ISO image to USB cp ~/ubuntu-10.10-server-amd64.iso /media/usb # mount ISO image, copy existing grub.cfg mount ~/ubuntu-10.10-server-amd64.iso /media/iso/ -o loop cp /media/iso/boot/grub/grub.cfg /media/usb/boot/grub/ I then edited /media/usb/boot/grub.cfg to add an .iso loopback, example grub entry: menuentry "Install Ubuntu Server" { set gfxpayload=keep loopback loop /ubuntu-10.10-server-amd64.iso linux (loop)/install/vmlinuz file=(loop)/preseed/ubuntu-server.seed iso-scan/filename=/ubuntu-10.10-server-amd64.iso quiet -- initrd (loop)/install/initrd.gz } When booting from USB, this would give me the Grub boot menu and start the installer, but the installer gave up after a couple of screens complaining that it couldn't find the CD-ROM drive. (Naturally, as the box I'm installing on doesn't have an optical drive.) I resolved this particular issue by giving up and doing the "create USB drive" routine using the Ubuntu Live desktop CD (on a computer that does have an optical drive), then the USB install works. But I expect that there is some way to do this from the command line of an Ubuntu system without X server and without an optical drive, so the question still stands. Does anyone know how?
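
    For what it's worth, on releases whose ISO images are hybrid (later Ubuntu releases are; whether the 10.10 server image qualifies is an assumption to verify), the whole task reduces to writing the image directly to the stick from the command line, as in this sketch:

        # WARNING: this wipes the target device. Check which device is the USB stick
        # first (e.g. with "sudo fdisk -l"), then write the whole ISO to the raw device.
        sudo dd if=~/ubuntu-10.10-server-amd64.iso of=/dev/sdX bs=4M
        sync    # flush buffers before unplugging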

    Read the article

  • "Shared Folders" Feature Is Not Working In VirtualBox

    - by Islam Hassan
    I have Ubuntu 11.10 as a host and another linux 2.6 distribution as a guest. When I try to set up guest additions, this error message appears Building the shared folder support module .. fail And because of that, when I run the following in a terminal mount -t vboxsf shared /root/shared I get the following error message mount: unknown filesystem type 'vboxsf' Any suggestions, please? EDIT Sorry, the error message I mentioned isn't complete; here it is in full. Building the shared folder support module ...fail! (Look at /var/log/vboxadd-install.log to find out what went wrong) This is the content of vboxadd-install.log Uninstalling modules from DKMS Attempting to install using DKMS Creating symlink /var/lib/dkms/vboxguest/4.1.2/source -> /usr/src/vboxguest-4.1.2 DKMS: add Completed. Kernel preparation unnecessary for this kernel. Skipping... Building module: cleaning build area.... make KERNELRELEASE=3.2.6 -C /lib/modules/3.2.6/build M=/var/lib/dkms/vboxguest/4.1.2/build..........................(bad exit status: 2) Error! Bad return status for module build on kernel: 3.2.6 (i686) Consult the make.log in the build directory /var/lib/dkms/vboxguest/4.1.2/build/ for more information. 0 0 ERROR: binary package for vboxguest: 4.1.2 not found Failed to install using DKMS, attempting to install without make KBUILD_VERBOSE=1 -C /lib/modules/3.2.6/build SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* WARNING: Symbol version dump /usr/src/linux-source-3.2.6/Module.symvers is missing; modules will have no dependencies and modversions. Actually, the log file is very large and it exceeds the 30,000-character limit. How can I upload the entire log file here?
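
    The make failure in the log usually means the guest kernel's build headers are missing. A hedged sketch of the usual first step (package names assume a Debian/Ubuntu-style guest; adjust for the actual distribution) before re-running the Guest Additions installer:

        # Inside the guest: install a toolchain plus headers matching the running kernel,
        # then rebuild the Guest Additions modules (the vboxsf module included).
        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo /etc/init.d/vboxadd setup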

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is View Link Consistency? When multiple instances (say VO1, VO2, VO3 etc) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3 etc). This capability is known as view link consistency. This feature works for any VO for which it is enabled, regardless of whether they are involved in a view link or not. What causes View Link inConsistency? Unless jbo.viewlink.consistent is disabled for this VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause View Link inConsistency.  1. setWhereClause 2. Unreferenced secondary EO 3. findByViewCriteria() 4. Using view link accessor row set Why does this happen - View Link inConsistency? Well, there can be one of the following reasons. a. In case of 1 & 2, the view link consistency flag is disabled on that view object. b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike previous cases, the view link consistency flag is not disabled, meaning that the changes in the default row set would be reflected in the new row set.  However, the opposite doesn't hold true. For instance, if a row is deleted from this new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deletion of row(s), I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria. c. For 4, it's similar to 3 - whenever a view link accessor row set is retrieved, a new row set is created. Now, creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also, please note that this new row set always originates from an internally created view object instance, not one that you added to the data model. This internal view object instance is created as needed and added with a system-defined name to the root application module. Anyway, the very reason a distinct, internally-created view object instance is used is to guarantee that it remains unaffected by developer-related changes to their own view object instances in the data model.

    Read the article

  • Installation issue after 4 attempts

    - by SixTen
    Have successfully installed Ubuntu 12.10 on 2 laptops, one running Vista & one running Windows8. Made 4 attempts (long downloads of WUBI.EXE) to different HD's & still NO GO. The machine is an older machine with Windows2000Professional installed & running. The system has 3 hard drives; C:(20.5 Gb with 7.26Gb Free), D: (74.5 Gb with 33.1 Gb Free), & E: (35.3 Gb with 24.8 Gb Free) which all have Gigabytes space available; also an A: 3 1/2floppy drive and a CD-drive burner. The CPU processor is older but seems sufficient: AMD Athlon XP 1700+ and the task manager of Windows2000 shows the processor works fine.. The flat-screen display works fine. Here is the error message I receive each time the 'installation configuration' is verified: "No root file system is defined" "Please correct this from partitioning menu" << The Ubuntu operating system is allowing me a couple of options at the very top right menu. I was able to establish a wireless connection but the MAIN homepage won't load with FIREFOX app or any other apps. I cannot access or even find any 'Partitioning Menu" from the displayed page. I cannot access files or Windows Explorer to view drives since I'm not using the Windows O/S. If I try to go back & re-install the UBUNTU 12.10 again, it always asks me to UNINSTALL the one found on the HD & then I run WUBI.EXE again which takes a long time for the download. Do I need to go back into Windows2000 & use Windows Explorer to look at the file structure & add a partition? On previous attempts I have tried loading the WUBI.EXE on all 3 HD's C: D: & E: Sure is frustrating?? Thanks for any suggestions. NEW UBUNTU user & what I've seen so far I like.. (J.R.)

    Read the article

  • How do I fix postfix TLS?

    - by Savanni D'Gerinel
    STARTTLS was working with my system earlier today. Without me altering the system in any way, it spontaneously broke. I've now been trying to fix it for a couple of hours, to no avail. When I connect to the server, this is what I get: savanni@Orolo:~$ telnet apps.savannidgerinel.com 25 Trying 129.121.182.135... Connected to apps.savannidgerinel.com. Escape character is '^]'. 220 *********************************************** ehlo dude 250-apps.savannidgerinel.com 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-XXXXXXXA 250-AUTH PLAIN 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN ^]close telnet> close Connection closed. Okay, obviously STARTTLS isn't present in this list. So I've been digging through my configuration files and working through the tutorials again, and that has done me no good at all. Here's my tls-related configuration: smtp_tls_CAfile = /etc/ssl/certs/savannidgerinel_com_CA.pem smtp_tls_cert_file = /etc/ssl/certs/apps.savannidgerinel.com.pem smtp_tls_key_file = /etc/ssl/private/apps.savannidgerinel.com.key.pem smtp_tls_security_level = may smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_tls_CAfile = /etc/ssl/certs/savannidgerinel_com_CA.pem smtpd_tls_cert_file = /etc/ssl/certs/apps.savannidgerinel.com.pem smtpd_tls_key_file = /etc/ssl/private/apps.savannidgerinel.com.key.pem smtpd_tls_loglevel = 3 smtpd_tls_received_header = yes smtpd_tls_security_level = may smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache tls_random_source = dev:/dev/urandom All of the certificate files are present, the server private key is present, the server CA is present, and the smtpd_scache.db and smtp_scache.db files are both present. All are accessible to the postfix user. Speaking of which, here are the processes running: savanni@apps:/var/lib/postfix$ ps aux | grep postfix root 3525 0.0 0.1 25112 1680 ? Ss 20:19 0:00 /usr/lib/postfix/master postfix 3526 0.0 0.1 27176 1524 ? S 20:19 0:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks postfix 3527 0.0 0.1 27228 1552 ? S 20:19 0:00 qmgr -l -t fifo -u postfix 3528 0.0 0.4 46948 4144 ? S 20:19 0:00 smtpd -n smtp -t inet -u -c -o stress= -s 2 postfix 3529 0.0 0.1 27176 1628 ? S 20:19 0:00 proxymap -t unix -u postfix 3530 0.0 0.3 38212 3176 ? S 20:19 0:00 tlsmgr -l -t unix -u -c postfix 3531 0.0 0.1 27176 1516 ? S 20:19 0:00 anvil -l -t unix -u -c postfix 3535 0.0 0.1 27188 1544 ? S 20:20 0:00 trivial-rewrite -n rewrite -t unix -u -c The log files say absolutely nothing related to TLS except for this: Nov 6 02:19:45 apps postfix/master[3525]: daemon started -- version 2.9.6, configuration /etc/postfix Nov 6 02:19:49 apps postfix/smtpd[3528]: initializing the server-side TLS engine Nov 6 02:19:49 apps postfix/tlsmgr[3530]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache Nov 6 02:19:49 apps postfix/tlsmgr[3530]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup Nov 6 02:19:49 apps postfix/smtpd[3528]: connect from unknown[204.16.68.108] Neither syslog nor mail.err shows any indication of a problem. As far as the whole system is concerned, all is well. But there is no STARTTLS and so I suddenly can't send any email at all. Help???
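
    Two quick checks that are often useful in this situation, shown as a sketch (the hostname is taken from the question; nothing here changes the configuration):

        # Show only the explicitly-set Postfix parameters, to confirm the smtpd_tls_*
        # settings above are really what the running daemon reads.
        postconf -n | grep tls

        # Ask for STARTTLS the way a mail client would; a working server prints the
        # certificate chain instead of refusing the command.
        openssl s_client -connect apps.savannidgerinel.com:25 -starttls smtp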

    Read the article

  • Secret of SQL Trace Duration Column

    - by Dan Guzman
    Why would a trace of long-running queries not show all queries that exceeded the specified duration filter?  We have a server-side SQL Trace that includes RPC:Completed and SQL:BatchCompleted events with a filter on Duration >= 100000.  Nearly all of the queries on this busy OLTP server run in under this 100 millisecond threshold so any that appear in the trace are candidates for root cause analysis and/or performance tuning opportunities. After an application experienced query timeouts, the DBA looked at the trace data to corroborate the problem.  Surprisingly, he found no long-running queries in the trace from the application that experienced the timeouts even though the application’s error log clearly showed detail of the problem (query text, duration, start time, etc.).  The trace did show, however, that there were hundreds of other long-running queries from different applications during the problem timeframe.  We later determined those queries were blocked by a large UPDATE query against a critical table that was inadvertently run during this busy period. So why didn’t the trace include all of the long-running queries?  The reason is that the SQL Trace event duration doesn’t include the time a request was queued while awaiting a worker thread.  Remember that the server was under considerable stress at the time due to the severe blocking episode.  Most of the worker threads were in use by blocked queries and new requests were queued awaiting a worker to free up (a DMV query on the DAC connection will show this queuing: “SELECT scheduler_id, work_queue_count FROM sys.dm_os_schedulers;”).  Technically, those queued requests had not started.  As worker threads became available, queries were dequeued and completed quickly.  These weren’t included in the trace because the duration was under the 100ms duration filter.  The duration reflected the time it took to actually run the query but didn’t include the time queued waiting for a worker thread. The important point here is that duration is not end-to-end response time.  Duration of RPC:Completed and SQL:BatchCompleted events doesn’t include time before a worker thread is assigned nor does it include the time required to return the last result buffer to the client.  In other words, duration only includes time after the worker thread is assigned until the last buffer is filled.  But be aware that duration does include the time needed to return intermediate result set buffers back to the client, which is a factor when large query results are returned.  Clients that are slow in consuming result sets can increase the duration value reported by the trace “completed” events.
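
    For reference, a sketch of how such a duration filter is typically defined when scripting a server-side trace (data column 13 is Duration, comparison operator 4 is "greater than or equal", and the trace id variable is a placeholder; duration is in microseconds on SQL Server 2005 and later, so 100000 matches the 100 ms threshold discussed above):

        -- Filter the trace to events whose Duration is at least 100 ms.
        DECLARE @duration BIGINT;
        SET @duration = 100000;
        EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

        -- DMV check for requests queued waiting on a worker thread (run on the DAC):
        SELECT scheduler_id, work_queue_count FROM sys.dm_os_schedulers;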

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I’ve been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration.  One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to “get latest” against the source repository and immediately build with no errors.  This can break down both in an automated build scenario and a “new guy” scenario when the solution has external dependencies that may not be present in the build environment. The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called “Reference Items”.  I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well, which gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day. This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention.  The issue that I ran into is that with Silverlight (at least version 4), the behavior of the “Add Reference” command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine it cannot be found at compile time and the build will fail. To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file).  Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath.  Here’s a before and after example of the component that ultimately led me to the investigation behind this post: Before: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" /> After: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">       <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>     </Reference>

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route that to a configurable listener. This gives you a lot of flexibility – you can create a single trace file – or multiple ones. There is even nice tooling around that. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files…blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already grateful that starting with the SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented – could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that, the way this works, all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear in your message text at the end. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK done that. Now it turns out that this mechanism does not work anymore in 1.3 with FullIIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues...The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now then have fun with your multi-instance deployments…. </rant>

    Read the article

  • Can't load vector font in Nuclex Framework

    - by ProgrammerAtWork
    I've been trying to get this to work for the last 2 hours and I'm not seeing what I'm doing wrong... I've added Nuclex.TrueTypeImporter to my references in my content and I've added Nuclex.Fonts & Nuclex.Graphics in my main project. I've put Arial-24-Vector.spritefont & Lindsey.spritefont in the root of my content directory. _spriteFont = Content.Load<SpriteFont>("Lindsey"); // works _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes I get this error on the _testFont line: File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont. So I've searched around and by the looks of it it has something to do with the content importer & the content processor. For the content importer I have no new choices, so I leave it as it is (Sprite Font Description - XNA Framework); for the content processor I select Vector Font - Nuclex Framework. And then I try to run it. _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes again I get the following error Error loading "Arial-24-Vector". It does work if I load a sprite, so it's not a pathing problem. I've checked the samples, they do work, but I think they also use a different version of the XNA framework because in my version the "Content" class starts with a capital letter. I'm at a loss, so I ask here. Edit: Something super weird is going on. I've just added the following two lines to a method inside FreeTypeFontProcessor::FreeTypeFontProcessor( Microsoft::Xna::Framework::Content::Pipeline::Graphics::FontDescription ^fontDescription, FontHinter hinter, just to check if the code would even get there: System::Console::WriteLine("I AM HEEREEE"); System::Console::ReadLine(); So, I compile it, put it in my project, I run it and... it works! What the hell?? This is weird because I've downloaded the binaries, which didn't work, and I've compiled the binaries myself, which didn't work either, but now I make a small change to the code and it works? So, now I remove the two lines, compile it again, and it works again. Someone care to elaborate what is going on? Probably some weird caching problem!

    Read the article

  • What tools exist for assessing an organisation's development capability?

    - by Eric Smith
    I have a bit of a challenge at work at the moment. Presently (and in fact, for some time now), we have been experiencing the following problems with some in-house maintained applications: Defects (sometimes quite serious) being released into production; The Customer (that is, the relevant business unit) perpetually changing their minds (or appearing to do so) about what issue to work on next; A situation where everyone seems to be in a "fire-fighting" mode a lot of the time; Development staff responding to operational requests from business users; ("operational" here means something that needs to be done in order to continue with business, or perhaps just to make a business user's life a little less painful, as opposed to fixing a bug in the application, or enhancing the application); Now I'm sure this doesn't sound particularly new or surprising to most of the participants on this Q&A site and no prizes for identifying the "usual suspects" when it comes to root causes. My challenge is that I have to persuade the higher-ups to do uncomfortable things in order to address all of this. The folk I need to persuade come from a mixture of the following two cultures: Accounting; IT Infrastructure. I have therefore opted for a strategy that draws from things with which folk from such a culture would be most comfortable (at least, in my estimation), namely: numbers and tangibles. Of course modern development practitioners know all too well that this sort of thing isn't easily solved using an analytical mindset (some would argue that that mindset is, in fact, entirely inappropriate). Nevertheless, this is the dichotomy with which I am faced, so that's the stake that I've put in the ground. I would like to be able to do research and use the outputs to present findings in the form of metrics and measures. I am finding it quite difficult, though, to find an agreed-upon methodology and set of templates for assessing an organisation's development capability--the only thing that seems applicable is the Software Engineering Institute's Capability Maturity Model. The latter, however, seems dated and even then rather vague. So, the question is: Do any tools or methodologies (free or commercial) exist that would assist me in completing this assessment?

    Read the article

  • Executable execution path: does it depend on the place the executable is called from?

    - by Valkea
    As I'm still a new Linux user, I still discover some behaviours and I'm unable to tell if they are "normal" or not. I searched the Internet but as I can't really find an answer I guess it's time to ask here. A few weeks ago I installed a small game called "Machinarium" and played it... but a few days later, when I wanted to continue my game, I was unable to make the game start correctly. And as I didn't have the time to search, I gave up. But yesterday, as I was working on a program of mine, I had exactly the same behaviour. So I searched a bit and I discovered that when using Nautilus with the "List view", I was able to run the program (i.e. the program does find the sound, image, etc. resources) when I was literally "inside" the executable's folder, but unable to when I was in a parent folder and expanded it down to the executable's folder to run it. To illustrate the behaviour here are two screenshots. It doesn't work if the executable is double-clicked from here. It does work if the executable is double-clicked from here. This is indeed the same "place", but the Nautilus view is slightly different as the current folder is not the same, and it seems to make a difference for the program. Furthermore, when I create a menu item via System Settings/Main Menu for the executable, it behaves just as if the executable can't find the resources (that's why I was unable to play Machinarium the second time, as I had created a menu shortcut after my first game). So I asked my program to generate a text file at its root when running, and I started launching it from different "parent" folders to see where the text file was generated. Each time, the file was generated in the top folder of the current Nautilus view. I expected to see it appear in the same folder as the executable (well, not once I had guessed what was happening, but before that I would have expected to see it appear in the exe folder). Can anyone explain to me why it works like this (I guess it's normal)? How am I supposed to solve this when creating programs? (Should I detect the executable path in my C++ code, or should I organize the resource files another way than on Windows?)
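
    On the C++ side of the question, a minimal sketch (Linux-specific, using /proc/self/exe; error handling kept short) of resolving resource paths relative to the executable instead of the current working directory:

        #include <unistd.h>
        #include <limits.h>
        #include <string>
        #include <iostream>

        // Return the directory containing the running executable, so resources can be
        // loaded as exeDir() + "/data/..." regardless of where the program was launched.
        static std::string exeDir() {
            char buf[PATH_MAX];
            ssize_t len = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
            if (len <= 0) return ".";          // fall back to the working directory
            buf[len] = '\0';                   // readlink does not null-terminate
            std::string path(buf);
            return path.substr(0, path.find_last_of('/'));
        }

        int main() {
            std::cout << "resources live under: " << exeDir() << "/data\n";
            return 0;
        }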

    Read the article

  • Ubuntu Install 11.10 doesn't recognize Windows 7 installation with new HDD

    - by arlendo
    Replaced my crashed HDD with a Seagate 2TB Sata (bought from a company who pulled it from a working computer, OS unknown) and did a fresh install of Windows 7. Windows shows 100MB boot partition (bootable NTFS) and 200GB Windows partition (NTFS), the rest is unallocated. Win7 Disk Management says the partitioning type is Master Boot Record. Win7 boots and runs fine. Ubuntu 11.10 Install procedes to Allocate Drive Space screen and should say This computer currently has Windows 7 on it. What would you like to do? Instead, it says something like Install doesn't detect any existing OS on this computer. When I click on Something else, the partition table shows only the unallocated space of 1.8TB. Ubuntu Disk Utility says Partitioning: Master Boot Record, but GParted Live says Partition Table: gpt. It was my original intention to have the Windows boot partition and application partition, then install Ubuntu 11.10 using boot, root, swap, and home partitions, and maybe another partition just for data (mostly photos). Currently, I would be happy if I could just get Ubuntu installed along with Win7. I am aware of the MBR limits of 3 Primary partitions and 1 Extended partition. I suspect that my new HDD is partitioned for GPT and that is why Ubuntu can't see the Win7 installation. Am I on the right track? I was going to use Windows Disk Management to convert GPT to MBR but I only have the one drive on my AMD-64 mini-computer and it says I have to empty the drive of all partitions before I can access the Convert command. And I can't find any bootable software that would allow me to do that conversion. Here is the result of sudo fdisk -l: ubuntu@ubuntu:~$ sudo fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 224 heads, 19 sectors/track, 918004 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xd4a68c18 Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 419637247 209715200 7 HPFS/NTFS/exFAT ubuntu@ubuntu:~$ Keep in mind that I'm a definite newbie to screwing around with the inner workings of Ubuntu. I previously had Ubuntu 10.04 running with Vista and I don't remember even having to partition anything that wasn't automatic in the install. Thanks for taking a look here. My Win7 is running fine but I miss my Ubuntu.
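
    A quick, read-only way to confirm from the live session which table type the disk really carries (the gdisk listing also reports whether leftover GPT structures coexist with the MBR, which would explain the installer's confusion):

        # Print the partition table type ("gpt" or "msdos") plus the partitions parted sees.
        sudo parted /dev/sda print

        # gdisk reports whether MBR and/or GPT structures are present on the disk.
        sudo gdisk -l /dev/sda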

    Read the article

  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this. I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months. Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking that kind of support, I'm talking more of a second level support guy (when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of work place this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code, they are more or less just knowing the system architecture, working with making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work maintenance, these are basically just bug fixes/improving bad code, so this sucks as well, however at least it's related to coding. Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything, I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy that knows it's not that great of an opportunity. I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being, no point making a bad impression on anybody, but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it/how you guys would feel. Thanks guys.

    Read the article

  • App_Offline.htm, taking site down for maintenance

    - by Vipin
    There is a much simpler and more graceful way to shut down a site while doing an upgrade, to avoid the problem of people accessing the site in the middle of a content update.   Basically, if you place a file named 'app_offline.htm' with the below contents in the root of a web application directory, ASP.NET will shut down the application and stop processing any new incoming requests for that application.  ASP.NET will also then respond to all requests for dynamic pages in the application by sending back the content of the app_offline.htm file (for example: you might want to have a “site under construction” or “down for maintenance” message).   Then, after the upgrade, just rename/delete the app_offline.htm file… and the site will be back to normal. Just remember that the size of the file should be greater than 512 bytes; it doesn't matter if you add some comments to it to push up the byte size - as long as it's greater than 512 bytes, it'll work fine.     <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head>    <title>Maintenance Mode - Outage Message</title></head><body>    <h1>Maintenance Mode</h1>     <p>We're currently undergoing scheduled maintenance. We will come back very shortly.</p>     <p>Sorry for the inconvenience!</p>     <!--            Adding additional hidden content so that IE Friendly Errors don't prevent    this message from displaying (note: it will show a "friendly" 404    error if the content isn't of a certain size).        <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>      <h2>Site under maintenance...</h2>         --></body></html>

    Read the article

  • How to generate xorg.conf? (X -configure segfaults)

    - by Nicolas Raoul
    My video card is working fine, I have no screen problem. I am trying to generate an xorg.conf so I did: [ Logout ] sudo service gdm stop [ Move away xorg.conf.back and xorg.conf.fglrx-0 that were in /etc/X11 ] sudo dpkg-reconfigure xserver-xorg sudo X -configure But this last command segfaults: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux nico 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 20:26:08 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-25-generic root=UUID=7447ab16-3406-442d-81e5-bb6a2d795205 ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Fri Oct 15 16:06:11 2010 List of video drivers: i740 ark geode siliconmotion mach64 s3 r128 apm intel neomagic vesa trident chips s3virge fglrx sis savage rendition i128 tseng ztv mga openchrome radeon ati nv v4l vmware cirrus tdfx nouveau sisusb voodoo fbdev (EE) Can't load FireGL DRM library (libfglrxdrm.so). Backtrace: 0: X (xorg_backtrace+0x3b) [0x80e938b] 1: X (0x8048000+0x61c8d) [0x80a9c8d] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0x34d410] 3: X (xf86CallDriverProbe+0x182) [0x80b82d2] 4: X (DoConfigure+0x1c8) [0x816b898] 5: X (InitOutput+0x1da) [0x80b98aa] 6: X (0x8048000+0x1ebbb) [0x8066bbb] 7: /lib/tls/i686/cmov/libc.so.6 (__libc_start_main+0xe6) [0x467bd6] 8: X (0x8048000+0x1e961) [0x8066961] Segmentation fault at address (nil) Caught signal 11 (Segmentation fault). Server aborting Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Note the line Can't load FireGL DRM library (libfglrxdrm.so) Note: I do have file /usr/lib/fglrx/xorg/modules/linux/libfglrxdrm.so It is strange that it segfaults whereas I can use Gnome with no problem but well... Might be related: I tried to install the driver from ATI's website recently, and from then glxgears crashes at start. How can I generate xorg.conf in those conditions? It might or might not involve solving the segfault problem.
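
    Since fglrx is at least partly installed (the FireGL DRM message above), one hedged alternative for producing a starting xorg.conf is ATI's own tool; retrying the X self-configuration on an unused display number is another common workaround. Both are sketches, not guaranteed fixes:

        # Let the fglrx tooling write a basic /etc/X11/xorg.conf for the ATI card.
        sudo aticonfig --initial

        # Or retry the X self-configuration on a spare display number (with gdm stopped).
        sudo X :1 -configure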

    Read the article

  • Impossible to install Ubuntu 10.10 dual boot with Windows 7 on new Acer desktop computer

    - by Don Myers
    My brother has a brand new Acer Desktop with Windows 7. I have done many installs (40+) of Ubuntu starting with 8.10, and have never run into this. I've spent three hours trying to do a dual boot install of 10.10. When you get to the place where you normally would choose to install as a dual boot or overwrite the existing information on the hard drive, that block is just blank. Nothing. No choices even to do a manual partition setup. If you try to go on you get the message "No root file system is defined. Please correct this from the partitioning menu." but there is nothing in the partitioning menu. I tried a good 10.04 disc also. Same thing happens with it. I ran a gparted live cd, and it shows the hard drive as sda with 3 partitions on the original. sda1 is a small partition called PQService. sda2 is another small partition called System Reserved, and GParted says it is the boot partition. sda3 is the main partation with the operating system (Windows 7) and all of the empty space. There is a little unallocated space at the very beginning and very end of the hard drive. If I go to places in the Live CD, it shows a 640 gb hard disk called Acer, but it also shows a 640 gb hard disk called system reserved. They are the same disk. There is just one hard drive. If you click properties in the System Reserved 640 gb, it shows all information as unknown. I had to change the boot order in the bios in order to run the live cd. The hard drive instead of being listed as such is listed as Raid:Raid Ready. Something the way this computer is set up is preventing Ubuntu from being able to identify the hard drive partitions at all to do an install, even if you were not doing a dual boot and just wanted to overwrite Windows. Is this a bug that needs reported? This is a major problem for me and my brother, but also for Ubuntu if new users want to Ubuntu and find they cannot install it.

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud Based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (With more popping up almost weekly) have all come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS

    Read the article

  • Software center is broken and can not be repaired or reinstalled

    - by Michal
    When I open the software center, I am told that I can not use it, for it is broken, and needs to be repaired. First I try to do this automatically, as I was offered. I enter a root password, and then the installation fails. installArchives() failed: reconfiguring packages... reconfiguring packages... reconfiguring packages... reconfiguring packages... (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 275048 files and directories currently installed.) Unpacking wine1.4-i386 (from .../wine1.4-i386_1.4-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb (--unpack): trying to overwrite '/usr/bin/wine', which is also in package wine1.5 1.5.5-0ubuntu1~ppa1~oneiric1+pulse17 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb Error in function: dpkg: dependency problems prevent configuration of wine1.4-common: wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4.1); however: Package wine1.4 is not installed. dpkg: error processing wine1.4-common (--configure): dependency problems - leaving unconfigured What should I do now? First of all, I've tried reinstalling the center, but it failed due to the same 1.4 dependency as is laid out here. I've googled for help and although I don't understand linux at all, I've tried some suggestions: I've tried editing the dpkg status in /var/lib/dpkg/status which failed because the file could not be found. I've tried purging wine/* but that had unresolved dependencies as well. It's a giant mess.
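
    A sketch of the usual way out of this particular file conflict (wine1.4 and the PPA wine1.5 both want to own /usr/bin/wine): remove one of the two first, then let apt finish the half-configured packages. The package names below are taken from the error output above:

        # Drop the PPA build that currently owns /usr/bin/wine, then let apt repair the rest.
        sudo apt-get remove wine1.5
        sudo apt-get -f install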

    Read the article

  • GPU hung when switching graphic card

    - by Lie Ryan
    I have a laptop (Dell Inspiron N4110) with switchable graphics. $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (rev ff) Normally, my laptop starts with both graphics cards enabled, which causes the laptop to get very hot and the fan to become very noisy. I have been using a small script to disable the Radeon card. For some time I was quite happy with this arrangement. However, I have been having some issues with the Intel card (IGD): the Intel card often randomly hangs when running OpenGL apps, and so I want to give the Radeon card (DIS) another chance. I have never been able to switch to the Radeon card, but recently, I found out that if I do a "delayed switching" (DDIS): # echo "DDIS" > /sys/kernel/debug/vgaswitcheroo/switch root@lieryan-dell-ubuntu:/sys/kernel/debug/vgaswitcheroo# cat switch 0:IGD:+:Pwr:0000:00:02.0 1:DIS: :Pwr:0000:01:00.0 then I log off (i.e. to restart X), the screen switches to a pseudo-tty and then it gets stuck, frozen. In this situation, the mouse and keyboard stop working, so I can't switch to another ptty. I tried ssh-ing from another computer to salvage logs (dmesg at that point) and whatnot; I found out that when it freezes, the active graphics card is the AMD card: -- this is from ssh -- # cat switch 0:IGD: :Off:0000:00:02.0 1:DIS:+:Pwr:0000:01:00.0 but the GPU is apparently hung; looking at dmesg gives: ... [ 1411.649974] vga_switcheroo: client 0 refused switch [ 1411.649985] vga_switcheroo: setting delayed switch to client 1 [ 1423.911759] vga_switcheroo: processing delayed switch to 1 [ 1424.006564] fbcon: Remapping primary device, fb1, to tty 1-63 [ 1424.006799] i915: switched off [ 1424.840351] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1425.718088] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1426.622377] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1427.355683] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1428.193549] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id ... the invalid framebuffer id error is repeated many times over ... I was able to successfully recover by switching back to the Intel card and restarting X from ssh, indicating that only the Radeon card has problems switching. System info: $ uname -a Linux lieryan-dell-ubuntu 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric The laptop also does not have an option to select the graphics card in the BIOS, and the proprietary driver, fglrx, has never worked; when I installed it through jockey ("Additional Drivers"), glxinfo showed that rendering was still being done by Mesa, the /sys/kernel/debug/vgaswitcheroo directory went missing, and the driver crashes with a traceback if I use xorg.conf to tell X to use fglrx. Does anyone have any idea whether it is possible to use this AMD card with either the radeon or the fglrx driver? logs: dmesg
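
    For comparison with the delayed (DDIS) switch described above, the immediate switch sequence normally looks like this sketch, run from a text console with X stopped (the display manager name is an assumption; Ubuntu 11.10 usually uses lightdm):

        # Switch to a text console, stop X, flip to the discrete card, restart X.
        sudo service lightdm stop
        echo DIS | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # power down the unused card
        sudo service lightdm start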

    Read the article

  • How does rc job work / order of (contradicting) "start on ..." and "stop on ..." stanzas

    - by Binarus
    Hi, I just can't understand how Upstart's rc job definition in Natty 11.04 works. To illustrate the problem, here is the definition (empty lines and comments are left out): start on runlevel [0123456] stop on runlevel [!$RUNLEVEL] export RUNLEVEL export PREVLEVEL console output env INIT_VERBOSE task exec /etc/init.d/rc $RUNLEVEL Let's suppose we currently are in runlevel 2 and the rc job is stopped (that is exactly the situation after booting my box and logging in via SSH). Now, let's assume that the system switches to runlevel 3, for example due to a command like "telinit 3" given by root. What will happen to the rc job? Obviously, the rc job will be started since it is currently stopped and the event runlevel 3 matches the start events. But from now on, things are unclear to me: According to the manual $RUNLEVEL evaluates to the new runlevel when the job is started (that means 3 in our example). Therefore, the next stanza "stop on runlevel [!$RUNLEVEL]" translates to "stop on runlevel [!3]"; that means we have a first stanza which will trigger the job, but the second stanza will never stop the job and seems to be useless. Since I know that the Ubuntu / Upstart people won't do useless things, I must be heavily misunderstanding something. I would be grateful for any explanation. While trying to understand this, an additional question came to my mind. If I had contradicting start and stop triggers, for example start on foo stop on foo what would happen? I swear I never will do that, but I am nevertheless very interested in how Upstart handles that on the theoretical level. Thank you very much! Editing the question as a reaction to geekosaur's first answer: I can see the parallelism, but it is not that easy (at least, not to me). Let's assume the job currently is still running, and a new runlevel event comes in (of course, the new runlevel is different from the current one). Then, the following should happen: 1) The job is single instance. That means that "start on ..." won't be triggered since the job is currently running; $RUNLEVEL is not touched. 2) "stop on ..." will be triggered since the new runlevel is different from $RUNLEVEL, so the job will be aborted. 3) Now, the job is stopped and waiting. I can't see how it is restarted with the new runlevel. AFAIK, initctl emits events only once, so "start on ..." won't be triggered and the new runlevel won't be entered. I know that I'm still misunderstanding something, and I am grateful for explanations. Thank you very much!

    Read the article

  • Unauthorized response from Server with API upload

    - by Ethan Shafer
    I'm writing a library in C# to help me develop a Windows application. The library uses the Ubuntu One API. I am able to authenticate and can even make requests to get the Quota (access to Account Admin API) and Volumes (so I know I have access to the Files API at least) Here's what I have as my Upload code: public static void UploadFile(string filename, string filepath) { FileStream file = File.OpenRead(filepath); byte[] bytes = new byte[file.Length]; file.Read(bytes, 0, (int)file.Length); RestClient client = UbuntuOneClients.FilesClient(); RestRequest request = UbuntuOneRequests.BaseRequest(Method.PUT); request.Resource = "/content/~/Ubuntu One/" + filename; request.AddHeader("Content-Length", bytes.Length.ToString()); request.AddParameter("body", bytes, ParameterType.RequestBody); client.ExecuteAsync(request, webResponse => UploadComplete(webResponse)); } Every time I send the request I get an "Unauthorized" response from the server. For now the "/content/~/Ubuntu One/" is hardcoded, but I checked and it is the location of my root volume. Is there anything that I'm missing? UbuntuOneClients.FilesClient() starts the url with "https://files.one.ubuntu.com" UbuntuOneRequests.BaseRequest(Method.{}) is the same requests that I use to send my Quota and Volumes requests, basically just provides all of the parameters needed to authenticate. EDIT:: Here's the BaseRequest() method: public static RestRequest BaseRequest(Method method) { RestRequest request = new RestRequest(method); request.OnBeforeDeserialization = resp => { resp.ContentType = "application/json"; }; request.AddParameter("realm", ""); request.AddParameter("oauth_version", "1.0"); request.AddParameter("oauth_nonce", Guid.NewGuid().ToString()); request.AddParameter("oauth_timestamp", DateTime.Now.ToString()); request.AddParameter("oauth_consumer_key", UbuntuOneRefreshInfo.UbuntuOneInfo.ConsumerKey); request.AddParameter("oauth_token", UbuntuOneRefreshInfo.UbuntuOneInfo.Token); request.AddParameter("oauth_signature_method", "PLAINTEXT"); request.AddParameter("oauth_signature", UbuntuOneRefreshInfo.UbuntuOneInfo.Signature); //request.AddParameter("method", method.ToString()); return request; } and the FilesClient() method: public static RestClient FilesClient() { return (new RestClient("https://files.one.ubuntu.com")); }
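
    Two details of OAuth 1.0 PLAINTEXT signing that commonly produce exactly this kind of 401 and are worth checking against the BaseRequest() above, shown as a hedged sketch (the ConsumerSecret and TokenSecret property names are assumptions mirroring the naming in the question's code):

        // Inside BaseRequest(), as a replacement for the corresponding AddParameter calls:

        // oauth_timestamp must be seconds since the Unix epoch, not DateTime.Now.ToString().
        string timestamp = ((long)(DateTime.UtcNow -
            new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds).ToString();
        request.AddParameter("oauth_timestamp", timestamp);

        // For PLAINTEXT, oauth_signature is "consumerSecret&tokenSecret", both percent-encoded.
        string signature = Uri.EscapeDataString(UbuntuOneRefreshInfo.UbuntuOneInfo.ConsumerSecret)
            + "&" + Uri.EscapeDataString(UbuntuOneRefreshInfo.UbuntuOneInfo.TokenSecret);
        request.AddParameter("oauth_signature", signature);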

    Read the article

  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: broken count 0 Opening Update manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install) Specificaly I have my ubuntu on an AspireOne with 8gb internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot but to no avail. I have also tried apt-get install --fix-broken and I get the following: sudo apt-get install --fix-broken [sudo] password for elina: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-headers-3.2.0-33-generic-pae The following NEW packages will be installed: linux-headers-3.2.0-33-generic-pae 0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded. 1 not fully installed or removed. Need to get 0 B/977 kB of archives. After this operation 11,3 MB of additional disk space will be used. Do you want to continue [Y/n]; y (Reading database ... 437051 files and directories currently installed.) Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ... dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack): unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device No apport report written because the error message indicates a disk full error dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I've tried all suggestions I could find: sudo apt-get clean sudo apt-get autoclean sudo apt-get autoremove sudo apt-get update sudo apt-get upgrade sudo apt-get -f install sudo apt-get install --fix-broken Then I saw that on the error there was a mention about free space. So I did a df -h and the result was: Filesystem Size Used Avail Use% Mounted on /dev/sda1 7,0G 5,5G 1,1G 84% / udev 235M 4,0K 235M 1% /dev tmpfs 97M 816K 96M 1% /run none 5,0M 0 5,0M 0% /run/lock none 242M 352K 242M 1% /run/shm I see that on my root folder I have 1.1Gb free. The broken package is linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb which only takes up 11.3Mb on my hard drive. I'm soooo lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket. It's really not worth the time. Any help for fixing this would be hot.
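
    Given the "No space left on device" line in the dpkg output, a sketch of the usual way to free room on a small root partition before retrying the install (the kernel version shown for purging is only a placeholder; never remove the kernel reported by "uname -r"):

        # See which kernel and header packages are installed.
        dpkg -l 'linux-image-*' 'linux-headers-*' | grep '^ii'

        # Remove packages that are no longer needed, then purge one old kernel (example version).
        sudo apt-get autoremove --purge
        sudo apt-get purge linux-image-3.2.0-32-generic-pae linux-headers-3.2.0-32-generic-pae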

    Read the article

  • Upgrade from Asp.Net MVC 1 to MVC 2 - how to, and issues with JsonRequestBehavior

    - by Renso
    Goal Upgrade your MVC 1 app to MVC 2 Issues You may get errors about your Json data being returned via a GET request violating security principles - we also address this here. This post is not intended to delve into why the Json GET request is or may be an issue, just how to resolve it as part of upgrading from MVC1 to 2. Solution First remove all references from your projects to the MVC 1 dll and replace them with the MVC 2 dll. Now update your web.config file in your web app root folder by simply changing references to assembly="System.Web.Mvc, Version 1.0.0.0 to Version 2.0.0.0; there are a couple of references in your config file, here are probably most of them you may have:         <compilation debug="true" defaultLanguage="c#">             <assemblies>                        <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />             </assemblies>         </compilation>           <pages masterPageFile="~/Views/Masters/CRMTemplate.master" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validateRequest="False">             <controls>                 <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />   Secondly, if you return Json objects from an ajax call via the GET method you have several options to fix this depending on your situation: 1. The simplest (in my case I did this for an internal web app): you may simply do:             return Json(myObject, JsonRequestBehavior.AllowGet);   2. In Mvc if you have a controller base you could wrap the Json method with:         public new JsonResult Json(object data)         {             return Json(data, "application/json", JsonRequestBehavior.AllowGet);                    }   3. The most work would be to decorate your Actions with:         [AcceptVerbs(HttpVerbs.Get)]   4. Another option that is also a lot of work, since it needs to be done for every ajax call returning Json, is:                             msg = $.ajax({ url: $('#ajaxGetSampleUrl').val(), dataType: 'json', type: 'POST', async: false, data: { name: theClass }, success: function(data, result) { if (!result) alert('Failure to retrieve the Sample Data.'); } }).responseText;   This should cover all the issues you may run into when upgrading. Let me know if you run into any other ones.

    Read the article
