Search Results

Search found 25093 results on 1004 pages for 'console output'.

  • aplay -l says no soundcards found; alsaconf says no supported cards; yet /proc/asound contains cards

    - by nimasmi
    I am trying to get HDMI output using a Gainward Nvidia 210 512 MB on Ubuntu 10.04 Lucid Lynx. I have upgraded alsa-driver, alsa-lib and alsa-utils to 1.0.24 by building from source, thanks to this blog post. Some relevant output... user@box:~$ lspci | grep Audio 00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2) 01:09.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05) 01:09.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05) 01:09.4 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05) 02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1) user@box:~$ cat /proc/asound/version Advanced Linux Sound Architecture Driver Version 1.0.24. Compiled on Sep 15 2012 for kernel 2.6.32-42-generic (SMP). user@box:~$ ls /proc/asound` card0 cards hwdep NVidia oss seq version card1 devices modules NVidia_1 pcm timers user@box:~$ aplay -l aplay: device_list:240: no soundcards found... user@box:~$ sudo /sbin/alsa-utils start * Setting up ALSA... * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1403: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'... amixer: Invalid command! ...done. Any help appreciated. PS my video card is connected only through the PCI-E slot. I assume there is no extra audio connection required.
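
    A few console checks that are often worth running in this situation (a hedged diagnostic sketch, not a confirmed fix for this machine; the commands are standard Ubuntu/ALSA tooling, everything else is an assumption):

        lsmod | grep snd               # are snd-hda-intel and the other snd modules actually loaded?
        ls -l /dev/snd/                # do the PCM/control device nodes exist, and who owns them?
        groups $USER                   # is the user in the 'audio' group?
        sudo alsa force-reload         # Ubuntu wrapper that unloads and reloads all ALSA kernel modules

    If the modules are loaded and the device nodes exist but aplay still reports nothing, a mismatch between the freshly built 1.0.24 userspace and the in-kernel driver is the next thing to rule out.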

  • How can I automatically lower Spotify playback volume when my Video Editing program makes a sound?

    - by Mark Major
    I'd like to listen to Spotify while I am video editing. This is just casual listening - nothing to do with the editing work. How can I automatically fade out the volume of Spotify when my video editing program plays audio? I often need to hear the video editing audio without the distraction of Spotify playing over the top, but the video editing playback is too on/off/on/off to switch Spotify audio manually each time. Without background music, I get really sick of the repeated playback of the audio clips with only silence in between. I suppose what I need is an app that monitors sound output from 'App A' and reduces the sound output from all others (Apps B, C, D, etc.) when something is played.

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to implement TDD to continue the development and have found a nice C unit framework for this. However, I'm totally stuck on how to implement it. Take this case for example: a user types a letter 'l' that is captured by ncurses getch(), and then an sqlite3 query is run that for every row calls a callback function. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is as expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI so that the callback function will populate a list of entries and that list will later be printed. In that case I would be able to check if that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted in some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? Using my original design I could just do another query sorted differently. Previously I've only done TDD with command-line applications, and there it's really easy to just compare the output with what I expect. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?
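
    One common way to attack this without simulating a terminal is to give the sqlite3 callback a seam: in production it calls the ncurses printer, in tests it calls a recorder the test can inspect. The sketch below only illustrates the shape of that seam; the type and function names are hypothetical, not taken from the original program.

        #include <stdio.h>

        /* Hypothetical seam: the sqlite3 callback never touches ncurses directly,
         * it calls whatever row printer was injected into the view. */
        typedef void (*row_printer)(void *ctx, int argc, char **argv, char **colnames);

        struct view {
            row_printer print_row;   /* wraps printw() in production, records rows in tests */
            void *ctx;
        };

        /* Signature-compatible with sqlite3_exec(); 'arg' is the struct view*. */
        static int on_row(void *arg, int argc, char **argv, char **colnames) {
            struct view *v = arg;
            v->print_row(v->ctx, argc, argv, colnames);
            return 0;   /* keep iterating over the result set */
        }

        /* Test double: writes each row as a pipe-separated line into a FILE*
         * (e.g. one opened with open_memstream), so the test can assert on it. */
        static void recording_printer(void *ctx, int argc, char **argv, char **colnames) {
            (void)colnames;
            FILE *out = ctx;
            for (int i = 0; i < argc; i++)
                fprintf(out, "%s%s", argv[i] ? argv[i] : "NULL", i + 1 < argc ? "|" : "\n");
        }

    Sorting and filtering stay in SQL exactly as in the original design; the tests only assert on what reaches the printer, and the "different sort order" case becomes a different query plus a different expected sequence of rows.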

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code: static volatile StringBuffer command = new StringBuffer(); @Override public void chain(InputPoller poller) { this.chain = poller; } @Override public synchronized void poll() { //basic testing for modifier keys, to be used later on boolean shift = false, alt = false, control = false, superkey = false; if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT)) shift = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU)) alt = true; if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL)) control = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA)) superkey = true; while(Keyboard.next()) if(Keyboard.getEventKeyState()) { command.append(Keyboard.getEventCharacter()); } if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { System.out.println("Escape down"); System.out.println(command.length() + " characters polled"); //works System.out.println(command.toString().length()); //works System.out.println(command.toString().charAt(4)); //works System.out.println(command.toString().toCharArray()); //blank line! System.out.println(command.toString()); //blank line! Framework.disableConsole(); } //TODO: Add command construction and console management after that } } Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.
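
    One hedged explanation that fits these symptoms (an assumption, not a confirmed diagnosis): Keyboard.getEventCharacter() fires for every key event, including ENTER, ESCAPE and modifier presses, so the buffer can accumulate control characters, and a terminal that receives a carriage return or escape byte can appear to print a blank or overwritten line even though length() and charAt() look right. A sketch of the polling loop that only appends printable characters:

        // Hedged sketch: keep control characters (ESC, CR, NUL from modifier-only
        // events) out of the buffer so System.out shows the text as typed.
        while (Keyboard.next()) {
            if (Keyboard.getEventKeyState()) {
                char c = Keyboard.getEventCharacter();
                if (c >= 32 && c != 127) {   // printable ASCII only; extend as needed
                    command.append(c);
                }
            }
        }

    Dumping the buffer as code points (for example, printing (int) command.charAt(i) in a loop) is a quick way to confirm whether stray control characters are actually present.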

  • Linux service --status-all shows "Firewall is stopped." what service does firewall refer to?

    - by codewaggle
    I have a development server with the lamp stack running CentOS: [Prompt]# cat /etc/redhat-release CentOS release 5.8 (Final) [Prompt]# cat /proc/version Linux version 2.6.18-308.16.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Oct 2 22:50:05 EDT 2012 [Prompt]# yum info iptables Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.anl.gov * extras: centos.mirrors.tds.net * rpmfusion-free-updates: mirror.us.leaseweb.net * rpmfusion-nonfree-updates: mirror.us.leaseweb.net * updates: mirror.steadfast.net Installed Packages Name : iptables Arch : x86_64 Version : 1.3.5 Release : 9.1.el5 Size : 661 k Repo : installed .... Snip.... When I run: service --status-all Part of the output looks like this: .... Snip.... httpd (pid xxxxx) is running... Firewall is stopped. Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) ....Snip.... iptables has been loaded to the kernel and is active as represented by the rules being displayed. Checking just the iptables returns the rules just like status all does: [Prompt]# service iptables status Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) .... Snip.... Starting or restarting iptables indicates that the iptables have been loaded to the kernel successfully: [Prompt]# service iptables restart Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] [Prompt]# service iptables start Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] I've googled "Firewall is stopped." and read a number of iptables guides as well as the RHEL documentation, but no luck. As far as I can tell, there isn't a "Firewall" service, so what is the line "Firewall is stopped." referring to?

  • How do I install VirtualBox in 13.04?

    - by user155708
    I installed the application using the .deb, but I can't get a virtual machine to boot. How do I install it correctly from the beginning? Opening my newly-created XP machine yields this message: Failed to open a session for the virtual machine Windows fucking sucks. VT-x features locked or unavailable in MSR. (VERR_VMX_MSR_LOCKED_OR_DISABLED). Result Code: NS_ERROR_FAILURE (0x80004005) Component: Console Interface: IConsole {db7ab4ca-2a3f-4183-9243-c1208da92392}

  • Second Monitor "Input Not Supported"

    - by Drew
    I have two identical monitors (new Acer s211hl) with a native resolution of 1920x1080. When enabling dual monitor support in Windows 7, the primary monitor works as expected, but the second monitor says, "Input not Supported" and fails to display anything. If I change the resolution of the second monitor to 1440x900, it works as expected. Likewise, if I set it to 1920x1080 with a refresh rate of 30hz, the monitor displays video. However, neither of these are solutions, because the output looks very blurry, and the content is stretched. I am using the following hardware: Monitors: Acer s211hl Motherboard: Asus F1A75M-Pro CPU/GPU: AMD A8-3850 with integrated Radeon HD 6550D graphics I suspect that there is probably an issue with the integrated graphics or motherboard not being able to output to two 1920x1080 monitors, but I am hoping for official confirmation.

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or white texture. Here are my questions: I assume the position that's automatically sent to the vertex shader from ogre is in object space? The gpu interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation or do I need to move that calculation to the fragment shader? Is there a way to debug shaders? I have no errors but I'm not sure I'm getting my parameters passed into the shaders correctly. Here's my shader code: void DepthVertexShader( float4 position : POSITION, uniform float4x4 worldViewProjMatrix, uniform float3 eyePosition, out float4 outPosition : POSITION, out float Depth ) { // position is in object space // outPosition is in camera space outPosition = mul( worldViewProjMatrix, position ); // calculate distance from camera to vertex Depth = length( eyePosition - position ); } void DepthFragmentShader( float Depth : TEXCOORD0, uniform float fNear, uniform float fFar, out float4 outColor : COLOR ) { // clamp output using clip planes float fColor = 1.0 - smoothstep( fNear, fFar, Depth ); outColor = float4( fColor, fColor, fColor, 1.0 ); } fNear is the near clip plane for the scene fFar is the far clip plane for the scene
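
    On the interpolation question: any vertex-shader output bound to a TEXCOORD semantic is interpolated across the triangle before it reaches the fragment shader, so computing the distance per vertex is fine for this technique. Two things in the listing that commonly produce all-black or all-white output (hedged, since the Ogre material script isn't shown): the Depth output has no binding semantic, and the distance mixes coordinate spaces if eyePosition is supplied in world space while position is still in object space. A sketch of an adjusted vertex shader; the added worldMatrix parameter is an assumption about how the material passes the world transform:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float4x4 worldMatrix,      // assumed: world transform provided by the material
                                uniform float3 eyePosition,        // assumed to be given in world space
                                out float4 outPosition : POSITION,
                                out float Depth : TEXCOORD0 )      // semantic so the value reaches the fragment shader
        {
            outPosition = mul( worldViewProjMatrix, position );
            float3 worldPos = mul( worldMatrix, position ).xyz;    // put the vertex in the same space as the eye
            Depth = length( eyePosition - worldPos );
        }

    For debugging, writing intermediate values into outColor one channel at a time, exactly as the fragment shader already does, is usually the most practical way to inspect shader parameters without dedicated tooling.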

  • Add & Show data in Java ArrayList [closed]

    - by Kaidul Islam Sazal
    I have a class inside a main class: static class Graph{ static int u, v, cost; } I have instantiated an ArrayList of the class: static List<Graph> g = new ArrayList<Graph>(); And I insert several values into the ArrayList like this: Scanner input = new Scanner(System.in); for (int i = 0; i < edge_no; i++) { Graph e = new Graph(); e.u = input.nextInt(); e.v = input.nextInt(); e.cost = input.nextInt(); g.add(e); } And I print it like this: for (int i = 0; i < edge_no; i++) { System.out.println(g.get(i).u + " " + g.get(i).v + " " + g.get(i).cost); } But the problem is that, when I print it, only the last value is shown every time. It seems that all the previous values are overwritten with it. Input: 1 2 5 1 3 8 2 3 9 Output: 2 3 9 2 3 9 2 3 9 The expected output is just like the input, but I can't fix the problem as I am a novice in Java.
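
    The cause is visible in the snippet itself: u, v and cost are declared static, so there is exactly one copy of them shared by every Graph instance, and each loop iteration overwrites that single copy, leaving every stored element showing the last edge read. A minimal corrected sketch:

        // Instance fields, not static: each Graph keeps its own edge data.
        static class Graph {
            int u, v, cost;

            Graph(int u, int v, int cost) {
                this.u = u;
                this.v = v;
                this.cost = cost;
            }
        }

        // Reading an edge then becomes:
        // g.add(new Graph(input.nextInt(), input.nextInt(), input.nextInt()));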

  • Ubuntu 3D does not work on Dell system with an AMD Radeon HD 6470M

    - by VeeKay
    I am running 64-bit Ubuntu on a Dell with a 1GB graphics card. I log in with "Ubuntu" hoping to see Unity 3D, but it doesn't appear; Unity 2D runs instead. When I type echo "$DESKTOP_SESSION" it confirms Unity 2D. I've checked the System info: the graphics row shows itself as empty. So I've presumed that the graphics drivers aren't detected, and hence I went to Additional Drivers and installed the fglrx driver that the UI suggested. Even after installing it, the graphics part in the System info details still shows nothing and Unity 2D still runs in spite of all the effort. Please help! How can I get my Unity 3D back? Hardware Info: Video Card: AMD Radeon™ HD 6470M - 1GB (For ICC) RAM: 6GB (1 X 2GB + 1 X 4GB) 2 DIMM DDR3 1333MHz OS: 64-bit Ubuntu 11.10 Edit: Output for /usr/lib/nux/unity_support_test -p X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 155 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 21 Current serial number in output stream: 21

  • Client-side prediction for FPS

    - by newprogrammer
    People who understand client-side prediction and client-side interpolation, I have a question: when I play the game Team Fortress 2 and type cl_predict 1 into the developer's console, it enables client-side prediction. The console also says "6 predictable entities reinitialized". It says this regardless of how many players are on the server, which makes sense, because other players are not predictable entities. I thought client-side prediction was only for the movement of the player. Are there other entities that the client can provide prediction for?

  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    I am running Ubuntu 10.04.1 LTS and it came pre-packaged with python2.6, but I need to replace it with python2.7.2. (The reason is simple: 2.7 has a lot of features backported from 3.) I installed python2.7.2 from source using ./configure, make, make altinstall. The altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7 and placed the interpreter in /usr/local/bin/python2.7. Then, to help mod_wsgi find python2.7, I added the following to /etc/apache2/sites-available/wsgisite: WSGIPythonHome /usr/local I start apache and run a test wsgi app, BUT I am greeted by Python 2.6.5 and not Python 2.7. Later I replaced the default python symlink to point to python 2.7: ln -f /usr/local/bin/python2.7 /usr/bin/python Now typing 'python' on the console opens python2.7, but somehow mod_wsgi still picks up python2.6. Next I tried PATH=/usr/local/bin:$PATH export PATH, then did a quick Apache restart, but yet again it's python2.6!! Here is my $PATH: /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games Contents of /etc/apache2/sites-available/wsgisite: WSGIPythonHome /usr/local <VirtualHost *:80> ServerName wsgitest.local DocumentRoot /home/wwwhost/pydocs/wsgi <Directory /home/wwwhost/pydocs/wsgi> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi </VirtualHost> app.wsgi: import sys def application(environ, start_response): status = '200 OK' output = sys.version response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Apache error.log: 'import site' failed; use -v for traceback [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python. [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56 [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''. [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''. [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'. [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$ [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'. Has anybody ever managed to make mod_wsgi run on a non-system-default version of python?
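
    The version in the error-log line "Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5" is the Python that mod_wsgi was compiled and linked against; WSGIPythonHome, symlinks or PATH changes cannot switch it afterwards. The usual route is to rebuild mod_wsgi from source against the new interpreter. A hedged sketch (paths are the ones from the question, and the Python 2.7 build generally needs to have been configured with --enable-shared for embedding to work):

        # inside an unpacked mod_wsgi source tree
        ./configure --with-python=/usr/local/bin/python2.7
        make
        sudo make install            # replaces the Apache module that was built against 2.6
        sudo service apache2 restart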

  • can't get past the login screen

    - by Greg
    Using a brand-new install of 12.04 LTS to a USB stick, created with Universal USB Installer 1.8.9.8. I log in as "ubuntu" with a blank password; the console appears for a second or two with text scrolling past and then it returns to the login page. I've used the same USB stick on several computers with the same results, so it doesn't appear to be a hardware/driver issue. I have not tried installing to the hard drive, because I wanted to try it out first.

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non opaque alpha value but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order independent transparency implementations. edit: Vertex Shader: // color varying for fragment shader varying mediump vec3 LightIntensity; varying highp vec3 VertexInModelSpace; void main() { // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0); vec3 LightColor = vec3(1.0, 1.0, 1.0); vec3 DiffuseColor = vec3(1.0, 0.25, 0.0); // find the vector from the given vertex to the light source vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex); vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal); vec3 lightDirn = normalize(vec3(LightPosition-vertexInWorldSpace)); // save vertexInWorldSpace VertexInModelSpace = vec3(gl_Vertex); // calculate light intensity LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn,normalInWorldSpace),0.0); // calculate projected vertex position gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: // varying to define color varying vec3 LightIntensity; varying vec3 VertexInModelSpace; void main() { gl_FragColor = vec4(LightIntensity,0.5); }
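
    What the video shows is consistent with the standard pitfall of alpha blending interacting with the depth buffer: with depth writes enabled, whichever transparent triangle happens to be drawn first blocks the ones behind it, so visibility changes with the viewing direction. A common simple setup, short of order-independent transparency, is to draw opaque geometry first and then the transparent model with blending on and depth writes off. A hedged sketch of the OpenGL ES 2 state involved (C-style GL calls; where this goes in your frame code is an assumption):

        /* Transparent pass: after all opaque geometry has been drawn. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* classic "over" blending */
        glEnable(GL_DEPTH_TEST);   /* still reject fragments hidden by opaque geometry */
        glDepthMask(GL_FALSE);     /* but do not write depth for transparent fragments */

        /* draw the semi-transparent model here, ideally sorted roughly back to front */

        glDepthMask(GL_TRUE);      /* restore state for the next opaque pass */
        glDisable(GL_BLEND);

    Self-overlapping parts of a single model can still blend in the wrong order without per-triangle sorting, but for a roughly convex mesh this usually looks acceptable.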

  • mpstat on Slackware 13.0 shows no utilization

    - by conartist6
    As the title says, the mpstat command, executed on Slackware 13.0, continuously shows almost no processor utilization of any sort. In fact, none of the output ever seems to change at all. The system is a dual-processor board with two hyperthreaded P4 Xeons. Any ideas? 08:50:06 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s 08:50:06 PM all 0.38 0.00 0.03 0.03 0.00 0.00 0.00 99.56 1510.46 08:50:06 PM 0 0.50 0.00 0.05 0.10 0.00 0.01 0.00 99.33 11.90 08:50:06 PM 1 0.32 0.00 0.03 0.01 0.00 0.00 0.00 99.64 0.00 08:50:06 PM 2 0.38 0.00 0.03 0.01 0.00 0.00 0.00 99.58 0.00 08:50:06 PM 3 0.29 0.00 0.02 0.00 0.00 0.00 0.00 99.68 0.00 This is, literally, the only output I can get from the program. No values ever change.
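
    One thing worth ruling out first: invoked with no interval argument, mpstat (from sysstat) reports averages since boot, so on a mostly idle machine the numbers are tiny and barely move between runs, which matches the output above. Giving it an interval and count makes it sample live activity, for example:

        mpstat -P ALL 2 5     # all CPUs, 2-second samples, 5 reports

    If live samples under load still show nothing, the per-CPU accounting itself would be suspect; with the since-boot averages alone it is hard to tell.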

  • Procurement and E-Business Suite Product Analyzers... Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be part of your Production maintenance program, run by your Sysadmin or by designated end users. The result set is an easy-to-read HTML output that provides recommendations, solutions and early warnings about items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, covering topics such as the following. Review them all at (Doc ID 1545562.1). Workflow Concurrent Processing Clone Log Parser Utility (Rapid Clone) Invoices, Payments, Accounting, Suppliers and EBTax Validate Data before Period Close EBTax Setup Payables Trial Balance Internet Expenses AutoInvoice Post-Process ASCP Performance PO Approval iProcurement Items For the Procurement-specific Analyzers, access them directly at: R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1) R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

  • How do I stop video tearing? (Nvidia prop driver, non-compositing window manager)

    - by Chan-Ho Suh
    I have that problem which seemingly afflicts many using the proprietary Nvidia driver: Video tearing: fine horizontal lines (usually near the top of my display) when there is a lot of panning or action in the video. (Note: switching back to the default nouveau driver is not an option, as its seemingly nonexistent power-management drains my battery several times faster) I've tried Totem, Parole, and VLC, and tearing occurs with all of them. The best result has been to use X11 output in VLC, but there is still tearing with relatively moderate action. Hardware: MacBook Air 3,2 -- which has an Nvidia GeForce 320M. There are two common fixes for tearing with Nvidia prop drivers: Turn off compositing, since Nvidia proprietary drivers don't usually play nice with compositing window managers on Linux (Compiz is an exception I'm aware of). But I use an extremely lightweight window manager (Awesome window manager) which is not even capable of compositing (or any cool effects). I also have this problem in Xfce, where I have compositing disabled. Enabling sync to VBlank. To enable this, I set the option in nvidia-settings and then autostart it as nvidia-settings -l with my other autostart programs. This seems to work, because when I run glxgears, I get: $ glxgears Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate. 303 frames in 5.0 seconds = 60.500 FPS 300 frames in 5.0 seconds = 59.992 FPS And when I check the refresh rate using nvidia-settings: $ nvidia-settings -q RefreshRate Attribute 'RefreshRate' (wampum:0.0; display device: DFP-2): 60.00 Hz. All this suggests sync to VBlank is enabled. As I understand it, this is precisely designed to stop tearing, and a lot of people's problem is even getting something like glxgears to output the correct info. I don't understand why it's not working for me. xorg.conf: http://paste.ubuntu.com/992056/ Example of observed tearing::

  • SQL Server 2012 Integration Services- Using Environments in Package Execution

    SQL Server 2012 Integration Services offers several different options for deploying and storing SSIS packages along with their associated projects, two of which are directly related to two deployment models available in SQL Server Data Tools console. Marcin Policht presents one of these methods, which deals with packages deployed using Project Deployment Model and leverages newly introduced Environments.

  • linux: upload / download difference on network shares

    - by Batsu
    I have a Red Hat Enterprise Linux 6 machine (with SELinux) which shows a significant difference in speed between download and upload (the latter significantly slower) of files shared over the LAN. The bottleneck seems to be the output of the linux machine, since I get a rate of around 1Mb/s when WinXP machines download files shared (using samba) by the RHEL machine; when uploading files from the RHEL machine to a WinXP shared folder; while uploading from the XP machines to the linux machine's shares; and when downloading the XPs' shares on the RHEL machine. Only shares between Windows machines run smoothly (around 50Mb/s). Since the upload from RHEL to the WinXP share is slowed too, I would exclude an issue in the configuration of samba. What could possibly determine this limit in the upload speed? Update: iptables doesn't show any output rule, and disabling it doesn't make any noticeable difference, so I would rule it out too.

  • Google I/O Sandbox Case Study: VectorUnit

    We interviewed VectorUnit at the Google I/O Sandbox on May 11, 2011 and they explained to us the benefits of building for the Android Platform. VectorUnit creates console-quality video games for the Android. For more information on Android developers, visit: developers.android.com For more information on VectorUnit, visit vectorunit.com

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC setup with a public subnet, and a private subnet. The NAT is in place of course. I can connect over SSH into an instance in the public subnet, as well as the NAT. I can even ssh connect to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e. ssh -i keyfile.pem [email protected] -p 1300 and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the job for this, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this since AWS apparently pre-configures the NAT instances when you set them up and I heard using iptables can mess that up. *filter :INPUT ACCEPT [129:12186] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [84:10472] -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: " -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT -A OUTPUT -o lo -j ACCEPT COMMIT # Completed on Wed Apr 17 04:19:29 2013 # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013 *nat :PREROUTING ACCEPT [2:104] :INPUT ACCEPT [2:104] :OUTPUT ACCEPT [6:681] :POSTROUTING ACCEPT [7:745] -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300 -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE COMMIT So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
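
    A hedged checklist of things that usually have to be true for this kind of forward to work on EC2 (none of it is a confirmed diagnosis of this particular setup): IP forwarding has to be enabled in the kernel, the public instance's Source/Destination Check has to be disabled in the EC2 console (otherwise AWS drops packets not addressed to the instance itself), and replies from the private instance have to come back through the same public instance, which is what the SNAT/MASQUERADE is for, since the private subnet's default route points at the NAT instance instead.

        # 1. Allow the kernel to forward packets at all (persist in /etc/sysctl.conf):
        sudo sysctl -w net.ipv4.ip_forward=1

        # 2. Disable the Source/Destination Check on the public instance in the EC2 console.

        # 3. NAT rules on the public instance (addresses/ports taken from the question):
        iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        iptables -A FORWARD -p tcp -d 10.10.10.10 --dport 1300 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -p tcp -d 10.10.10.10 --dport 1300 -j MASQUERADE

    With the MASQUERADE scoped to the destination host and port, the private instance sees the connection as coming from the public instance's private address, so its replies return along a path the DNAT state can untranslate.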
