Search Results

Search found 20140 results on 806 pages for 'output formatting'.


  • Ubuntu 12.10 graphics does not work properly

    - by madox2
    My graphics on Ubuntu 12.10 do not work as well as they did on 12.04. After the upgrade I installed the driver for my Nvidia GTS 450 graphics card:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    But sometimes I see slight lag in videos played in VLC, some of the desktop and window effects lag, and sometimes I can see an indescribable source of pixels on my screen at the start of Ubuntu, and so on. I feel a difference between 12.04 and 12.10, in favour of the former version. Does anyone know what's wrong or what I am missing? Here is the output of lspci -k:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
            Kernel driver in use: pcieport
            Kernel modules: shpchp
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
            Subsystem: Giga-byte Technology Device 1c3a
            Kernel driver in use: mei
            Kernel modules: mei
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
            Subsystem: Giga-byte Technology Device 5006
            Kernel driver in use: ehci_hcd
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
            Subsystem: Giga-byte Technology Device a000
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
            Kernel driver in use: pcieport
            Kernel modules: shpchp
        00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
            Kernel driver in use: pcieport
            Kernel modules: shpchp
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
            Subsystem: Giga-byte Technology Device 5006
            Kernel driver in use: ehci_hcd
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
        00:1f.0 ISA bridge: Intel Corporation H61 Express Chipset Family LPC Controller (rev 05)
            Subsystem: Giga-byte Technology Device 5001
            Kernel driver in use: lpc_ich
            Kernel modules: lpc_ich
        00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05)
            Subsystem: Giga-byte Technology Device b002
            Kernel driver in use: ata_piix
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
            Subsystem: Giga-byte Technology Device 5001
            Kernel modules: i2c-i801
        00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05)
            Subsystem: Giga-byte Technology Device b002
            Kernel driver in use: ata_piix
        01:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTS 450] (rev a1)
            Subsystem: CardExpert Technology Device 0401
            Kernel driver in use: nvidia
            Kernel modules: nvidia_current, nouveau, nvidiafb
        01:00.1 Audio device: NVIDIA Corporation GF116 High Definition Audio Controller (rev a1)
            Subsystem: CardExpert Technology Device 0401
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel
        03:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
            Subsystem: Giga-byte Technology Device e000
            Kernel driver in use: atl1c
            Kernel modules: atl1c


  • How can I fix broken i915 drivers for Intel GPUs?

    - by Alen Mujezinovic
    I've got trouble getting the i915 drivers to work correctly on my laptop (HP Pavilion DM4 2101ea). Specifically, the laptop screen goes black and stays black after the splash graphic when booting, both from USB key and from hard drive. To get anything onto the display after the splash screen I have to boot with acpi=off, nomodeset, or i915.modeset=0. I'd rather not turn ACPI off because I like my fans spinning, and nomodeset is a bit overkill, so for now I'm booting with i915.modeset=0. Unfortunately, this turns off KMS, and my current maximum resolution on the laptop screen is fixed at 1024x768 instead of the panel's real capability. When not setting any of the above boot flags and I plug in an external monitor, the external monitor works fine. When booting with the flags, the external monitor works fine too, but can only do 1024x768 and can't do anything other than mirroring the laptop display. I did upgrade the i915 drivers from 2.17, which ships with Precise, to 2.19, which is the most recent release, but had no luck getting anything to display. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5)
        00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        01:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        08:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Here's lshw -C video:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: 2nd Generation Core Processor Family Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 09
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list
             configuration: latency=0
             resources: memory:c0000000-c03fffff memory:b0000000-bfffffff ioport:4000(size=64)

    Both outputs are generated after booting with i915.modeset=0. Here's a complete Xorg.log file from a boot into a black screen: https://gist.github.com/479ce06454e47d6123e1 The graphics card is an Intel HD 3000 integrated GPU. I've never had problems with Intel hardware on Ubuntu before, so this is very surprising.
    If you could provide a method to make i915 work, suggest alternative drivers, a way to boot with i915.modeset=0 but with higher resolutions and KMS on, or explain what is happening and how to fix it, I'll give you an answer badge. :) Thanks


  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    - by hinkmond
    And now here's the Java code that you'll need to read your ghost sensor on your Raspberry Pi. The general idea is that you are using Java code to access the GPIO pin on your Raspberry Pi where the ghost sensor (a JFET transistor) detects minute changes in the electromagnetic field near the Raspberry Pi, and will change the GPIO pin to high (+3 volts) when something is detected; otherwise there is no value (ground). Here's that Java code:

        try {
            /*** Init GPIO port(s) for input ***/
            // Open file handles to GPIO port unexport and export controls
            FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
            FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

            for (String gpioChannel : GpioChannels) {
                System.out.println(gpioChannel);

                // Reset the port
                File exportFileCheck = new File("/sys/class/gpio/gpio" + gpioChannel);
                if (exportFileCheck.exists()) {
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                }

                // Set the port for use
                exportFile.write(gpioChannel);
                exportFile.flush();

                // Open file handle to input/output direction control of port
                FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

                // Set port for input
                directionFile.write(GPIO_IN);
            }

            /*** Read data from each GPIO port ***/
            RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
            int sleepPeriod = 10;
            final int MAXBUF = 256;
            byte[] inBytes = new byte[MAXBUF];
            String inLine;
            int zeroCounter = 0;

            // Get current timestamp with Calendar()
            Calendar cal;
            DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
            String dateStr;

            // Open RandomAccessFile handle to each GPIO port
            for (int channum = 0; channum ...

    And then we just load up our Java SE Embedded app, place each Raspberry Pi with a ghost sensor attached in strategic locations around our Santa Clara office (which apparently is very haunted by ghosts from the Agnews Insane Asylum 1906 earthquake), and watch our analytics for any ghosts. Easy peazy. See the previous posts for the full series on the steps to this cool demo:

        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    Hinkmond


  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit.

    This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendering automatically.

    Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned with no explanation in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says:

        "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."


  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1-compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly, because when I use immediate mode without shaders the texture displays fine, but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4(0.0, 1.0, 1.0, 1.0) I get a nice blue quad, so I know that my shader is able to set the colour. It appears that either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not XNA).

    Vertex shader:

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;

            // Set the position of the current vertex
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        uniform sampler2D diffuseTexture;

        void main()
        {
            // Set the output color of our current pixel
            gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st);
            //gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
        }

    Drawing code:

        int vb, eb;
        GL.GenBuffers(1, out vb);
        GL.GenBuffers(1, out eb);

        //                Position          Texture
        float[] verts = { 0.1f, 0.1f, 0.0f, 0.0f, 0.0f,
                          1.9f, 0.1f, 0.0f, 1.0f, 0.0f,
                          1.9f, 1.9f, 0.0f, 1.0f, 1.0f,
                          0.1f, 1.9f, 0.0f, 0.0f, 1.0f };
        uint[] indices = { 0, 1, 2, 0, 2, 3 };

        // Upload data to the VBO
        GL.BindBuffer(BufferTarget.ArrayBuffer, vb);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw);
        GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);

        // Upload texture
        int buffer = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, buffer);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear);
        GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
        GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data);

        // Draw
        GL.UseProgram(shaderProgram);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.TextureCoordArray);
        GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0);
        GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);
        GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);
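
    One hedged observation on listings like this (an editorial aside, not part of the original question): in OpenTK the final argument to GL.VertexPointer and GL.TexCoordPointer is a byte offset into the bound VBO, so passing 3 points three bytes in rather than three floats in. A minimal sketch of the pointer setup with explicit byte offsets, assuming the same interleaved 5-float layout as above:

        // Each vertex is 5 floats: x, y, z, u, v.
        int stride = 5 * sizeof(float);
        GL.VertexPointer(3, VertexPointerType.Float, stride, (IntPtr)0);
        // Texture coordinates start after the 3 position floats (12 bytes in).
        GL.TexCoordPointer(2, TexCoordPointerType.Float, stride, (IntPtr)(3 * sizeof(float)));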


  • How to avoid the exception “Substitution controls cannot be used in cached User Controls or cached Master Pages”

    - by DigiMortal
    Recently I wrote an example about using user controls with donut caching. Because cache substitutions are not allowed inside partially cached controls, you may get the error "Substitution controls cannot be used in cached User Controls or cached Master Pages" when breaking this rule. In this posting I will introduce some strategies that help to avoid this error.

    How does the Substitution control check its location? The Substitution control uses the following check in its OnPreRender method:

        protected internal override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            for (Control control = this.Parent; control != null; control = control.Parent)
            {
                if (control is BasePartialCachingControl)
                {
                    throw new HttpException(SR.GetString("Substitution_CannotBeInCachedControl"));
                }
            }
        }

    It traverses the control tree upward from its parent to find at least one control that is partially cached. If such a control is found, the exception is thrown.

    Reusing the functionality. If you want to act yourself before your control causes the exception mentioned above, you can use the same logic. I modified the previously shown code into a method that can easily be moved to a user controls base class, if you have one. If you don't, you can use it in the controls where you need this check.

        protected bool IsInsidePartialCachingControl()
        {
            for (Control control = Parent; control != null; control = control.Parent)
                if (control is BasePartialCachingControl)
                    return true;

            return false;
        }

    Now it is up to you how to handle the situation where your control with substitutions is a child of some partially cached control. You can also add some debug-level output here, so you can see exactly which controls in the control hierarchy are cached and cause problems.
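
    As a hedged illustration of how the helper might be used (the control, field, and method names here are hypothetical, not from the original post), a user control could degrade gracefully instead of throwing:

        // Hypothetical user control that falls back to static rendering when
        // donut caching would make a Substitution control throw.
        public partial class CurrentTimeControl : System.Web.UI.UserControl
        {
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);

                if (IsInsidePartialCachingControl())
                {
                    // A Substitution control would throw here, so render a
                    // plain literal instead (loses per-request freshness).
                    timePlaceholder.Controls.Add(new LiteralControl(DateTime.Now.ToString()));
                }
                else
                {
                    var substitution = new Substitution { MethodName = "GetCurrentTime" };
                    timePlaceholder.Controls.Add(substitution);
                }
            }

            // Callback used by the Substitution control; must be static.
            public static string GetCurrentTime(HttpContext context)
            {
                return DateTime.Now.ToString();
            }
        }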


  • Why does switching users completely hang my system every time?

    - by Stéphane
    I have a fresh install of 11.04 64-bit, with 2 administrator accounts and 4 normal accounts. The 4 normal accounts (the kids' accounts) don't have passwords; they can log in simply by clicking on their names. When any of the users -- either admin or normal -- tries to switch to another account by clicking in the top-right corner of the screen and selecting another user, the screen goes black and the entire system locks up. Even CTRL+ALT+F1 through F7 does nothing. This is reproducible 100% of the time on this system. I can ssh into the box when the console locks up, and by running top, I see that Xorg is consuming about 100% of the CPU. Looking at the output of "ps axfu" in bash while the system is in this "locked up" state, here is the lightdm and X process tree:

        USER      PID %CPU %MEM    VSZ    RSS TTY  STAT START TIME COMMAND
        root     1153  0.0  0.1 183508   4292 ?    Ssl  Dec26 0:00 lightdm
        root     2187  0.4  4.6 265976 164168 tty7 Ss+  00:43 0:21  \_ /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        stephane 2612  0.0  0.3 266400  10736 ?    Ssl  01:52 0:00  \_ /usr/bin/gnome-session --session=ubuntu
        stephane 2650  0.0  0.0  12264    276 ?    Ss   01:52 0:00  |   \_ /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/gnome-session --session=ubuntu
        stephane 2703  0.8  3.0 562068 106548 ?    Sl   01:52 0:08  |   \_ compiz
        stephane 2801  0.0  0.0   4264    584 ?    Ss   01:52 0:00  |   |   \_ /bin/sh -c /usr/bin/compiz-decorator
        stephane 2802  0.0  0.3 265744  13772 ?    Sl   01:52 0:00  |   |       \_ /usr/bin/unity-window-decorator
        ...cut...
        root     3024 80.6  0.3 107928  13088 tty8 Rs+  01:53 12:34 \_ /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch

    That last process, pid #3024 in this case, is what has the CPU pegged. In case it matters (I suspect it might), here is what I think may be the relevant information for my video card, taken from /var/log/Xorg.0.log:

        [  3392.653] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so
        [  3392.653] (II) Module glx: vendor="FireGL - AMD Technologies Inc."
        [  3392.653]    compiled for 6.9.0, module version = 1.0.0
        ...
        [  3392.655] (II) LoadModule: "fglrx"
        [  3392.655] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so
        [  3392.672] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc."
        [  3392.672]    compiled for 1.4.99.906, module version = 8.88.7
        [  3392.672]    Module class: X.Org Video Driver
        ...
        [  3392.759] (==) fglrx(0): ATI 2D Acceleration Architecture enabled
        [  3392.759] (--) fglrx(0): Chipset: "AMD Radeon HD 6410D" (Chipset = 0x9644)

    Lastly: I did see this posting: Change user on 11.10 hangs system ...but I checked, and the libpam-smbpass package isn't installed on this system.


  • Finding furthermost point in game world

    - by user13414
    I am attempting to find the furthermost point in my game world given the player's current location and a normalized direction vector in screen space. My current algorithm is:

        1. Convert the player's world location to screen space.
        2. Multiply the direction vector by a large number (2000) and add it to the player's screen location to get the distant screen location.
        3. Convert the distant screen location to world space.
        4. Create a line running from the player's world location to the distant world location.
        5. Loop over the bounding "walls" (of which there are always 4) of my game world.
        6. Check whether the wall and the line intersect.
        7. If so, where they intersect is the furthermost point of my game world in the direction of the vector.

    Here it is, more or less, in code:

        public Vector2 GetFurthermostWorldPoint(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);
            var distantScreenLocation = screenLocation + (directionVector * 2000);
            var distantWorldLocation = entity.ScreenPointToWorld(distantScreenLocation);

            var line = new Line(entity.Center, distantWorldLocation);

            float intersectionDistance;
            Vector2 intersectionPoint;

            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }

    Now this works, for some definition of "works". I've found that the further out my distant screen location is, the less chance it has of working. When digging into the reasons why, I noticed that calls to Viewport.Unproject could result in wildly varying return values for points that are "far away". I wrote this stupid little "test" to try and understand what was going on:

        [Fact]
        public void wtf()
        {
            var screenPositions = new Vector2[]
            {
                new Vector2(400, 240),
                new Vector2(400, -2000),
            };

            var viewport = new Viewport(0, 0, 800, 480);
            var projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, viewport.Width / viewport.Height, 1, 200000);
            var viewMatrix = Matrix.CreateLookAt(new Vector3(400, 630, 600), new Vector3(400, 345, 0), new Vector3(0, 0, 1));
            var worldMatrix = Matrix.Identity;

            foreach (var screenPosition in screenPositions)
            {
                var nearPoint = viewport.Unproject(new Vector3(screenPosition, 0), projectionMatrix, viewMatrix, worldMatrix);
                var farPoint = viewport.Unproject(new Vector3(screenPosition, 1), projectionMatrix, viewMatrix, worldMatrix);

                Console.WriteLine("For screen position {0}:", screenPosition);
                Console.WriteLine("    Projected Near Point = {0}", nearPoint.TruncateZ());
                Console.WriteLine("    Projected Far Point  = {0}", farPoint.TruncateZ());
                Console.WriteLine();
            }
        }

    The output I get on the console is:

        For screen position {X:400 Y:240}:
            Projected Near Point = {X:400 Y:629.571 Z:599.0967}
            Projected Far Point  = {X:392.9302 Y:-83074.98 Z:-175627.9}

        For screen position {X:400 Y:-2000}:
            Projected Near Point = {X:400 Y:626.079 Z:600.7554}
            Projected Far Point  = {X:390.2068 Y:-767438.6 Z:148564.2}

    My question is really twofold:

        1. What am I doing wrong with the unprojection such that it varies so wildly and, thus, does not allow me to determine the corresponding world point for my distant screen point?
        2. Is there a better way altogether to determine the furthermost point in world space given a current world space location and a directional vector in screen space?
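
    A hedged sketch (an editorial illustration, not part of the original question) of the usual way to turn a screen point into a world-space direction: unproject the same screen point at depths 0 and 1, then treat the normalized difference as a ray rather than trusting the raw far point, whose coordinates blow up near the far plane.

        // Minimal sketch: build a world-space pick ray from a screen point.
        // Assumes the XNA Viewport/Matrix/Ray types used in the test above.
        Ray GetPickRay(Vector2 screenPoint, Viewport viewport,
                       Matrix projection, Matrix view, Matrix world)
        {
            var near = viewport.Unproject(new Vector3(screenPoint, 0f), projection, view, world);
            var far  = viewport.Unproject(new Vector3(screenPoint, 1f), projection, view, world);

            // Only the direction of (far - near) is meaningful; its magnitude
            // depends on the far-plane distance and should not be used directly.
            var direction = Vector3.Normalize(far - near);
            return new Ray(near, direction);
        }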


  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive for space. On log-in it says:

        System load:  2.24               Processes:           179
        Usage of /:   88.7% of 912.89GB  Users logged in:     0
        Memory usage: 6%                 IP address for p5p1: 192.168.0.100
        Swap usage:   0%

        => / is using 88.7% of 912.89GB

    lvdisplay outputs:

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/root
        LV Name                root
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 1
        LV Size                2.70 TiB
        Current LE             707789
        Segments               2
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/swap_1
        LV Name                swap_1
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 2
        LV Size                3.75 GiB
        Current LE             959
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1

    vgdisplay outputs:

        VG Name               DeathStar-vg
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               2.73 TiB
        PE Size               4.00 MiB
        Total PE              715335
        Alloc PE / Size       708748 / 2.70 TiB
        Free  PE / Size       6587 / 25.73 GiB

    df outputs:

        Filesystem                     1K-blocks      Used Available Use% Mounted on
        /dev/mapper/DeathStar--vg-root 957238932 848972636  59634696  94% /
        none                                   4         0         4   0% /sys/fs/cgroup
        udev                             1864716         4   1864712   1% /dev
        tmpfs                             374968      1060    373908   1% /run
        none                                5120         4      5116   1% /run/lock
        none                             1874824       148   1874676   1% /run/shm
        none                              102400        24    102376   1% /run/user
        /dev/sda2                         234153     56477    165184  26% /boot

    And fdisk /dev/sda -l outputs:

        Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1  4294967295  2147483647+  ee  GPT
        Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73 TB. Thanks in advance for any help.

    EDIT -- Yes, I did make changes to the LVM config, but it didn't do anything. As requested, output of parted -l /dev/sda:

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2097kB  1049kB                     bios_grub
         2      2097kB  258MB   256MB   ext2
         3      258MB   3001GB  3000GB                     lvm

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sdb: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start  End  Size  Type  File system  Flags

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  4022MB  4022MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-root: 2969GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system  Flags
         1      0.00B  2969GB  2969GB  ext4


  • Is SOAP HTTP POST more complicated than I thought?

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();

            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary; however, I have now been given the details for the web service. This includes some information which is unfamiliar to me, and I'm unsure whether I need to include it. The information I was sent was:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id> (Input data) (Actual XML data as string)
        <parm-id>status</parm-id> (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.
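
    A hedged sketch of what those details might translate to in code, reusing postData from the snippet above: the method and namespace name the element the XML payload is wrapped in inside a SOAP envelope, and the userid/password suggest HTTP basic auth. The exact envelope the service expects is an assumption here; its WSDL would be authoritative. Requires System.Net, System.IO, System.Text and System.Security.

        // Hedged sketch: wrap the payload in a SOAP 1.1 envelope and attach
        // basic-auth credentials. Envelope shape and SOAPAction are assumptions.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://sometext/soap/rpc");
        req.Method = "POST";
        req.ContentType = "text/xml; charset=utf-8";
        req.Headers.Add("SOAPAction", "receiveInfo");
        req.Credentials = new NetworkCredential("user", "pass");

        string envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
              "<soap:Body>" +
                "<m:receiveInfo xmlns:m=\"http://sometext/a.services\">" +
                  "<xmldata>" + SecurityElement.Escape(postData) + "</xmldata>" +
                "</m:receiveInfo>" +
              "</soap:Body>" +
            "</soap:Envelope>";

        byte[] body = Encoding.UTF8.GetBytes(envelope);
        req.ContentLength = body.Length;
        using (Stream s = req.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }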


  • Building SANE from git source produces a backend mismatch on 12.04 even if built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is anything but easy to do a proper install of SANE from source (git repo). I've found other scanning issues while trying to find an answer to this, where the output people posted seems to indicate they suffer the same issue (unknowingly). If I run a compiled SANE source from git on a fresh install of Ubuntu 12.04, I get:

        $ scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.22

    (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.) My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me if I don't have TPU2 support). Since asking it to enter transparency mode shows 1.0.22 behaviour, this implies that the epson2 backend comes from 1.0.22 and not 1.0.24, even though I just built it. If I install SANE with a prefix to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git locally and have it correctly match backends:

        $ ./SANE/bin/scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.24
        $ scanimage -V
        scanimage (sane-backends) 1.0.22; backend version 1.0.22

    On this computer the 1.0.24 build correctly finds TPU2 on the Epson V700. So what am I missing/doing wrong? (And I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just debugging.) Any help would be much appreciated.

    Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck -- the backend mismatch was there again (and I do have libusb-dev installed).

    Edit 2: I updated to Ubuntu 12.10, which now has the 1.0.23 SANE drivers. I haven't dared to try compiling from source on 12.10, since 1.0.23 is good enough for me. This is just a workaround, and I would still like to know what's up with Ubuntu 12.04.


  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively. Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient. Here's a potential cause of inefficiency. Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction does a count of the number of leading zero bits in an integer register - so it should really be called leading zero count). So we put together an inline template called lzd.il looking like:

        .inline lzd
          lzd %o0,%o0
        .end

    And we throw together some code that uses it:

        int lzd(int);
        int a;
        int c = 0;

        int main()
        {
          for (a = 0; a < 1000; a++)
          {
            c = lzd(c);
          }
          return 0;
        }

    We compile the code with some amount of optimisation, and look at the resulting code:

        $ cc -O -xtarget=T4 -S lzd.c lzd.il
        $ more lzd.s
        .L77000018:
        /* 0x001c  11 */  lzd     %o0,%o0
        /* 0x0020   9 */  ld      [%i1],%i3
        /* 0x0024  11 */  st      %o0,[%i2]
        /* 0x0028   9 */  add     %i3,1,%i0
        /* 0x002c     */  cmp     %i0,999
        /* 0x0030     */  ble,pt  %icc,.L77000018
        /* 0x0034     */  st      %i0,[%i1]

    What is surprising is that we're seeing a number of loads and stores in the code. Everything could be held in registers, so why is this happening? The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated. Earlier compiler phases see a function call. The called function could do all kinds of nastiness to global variables (like 'a' in this code), so we need to load them from memory after the function call, and store them to memory before the function call. Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects - meaning that it does not read or write to memory. The directive to do that is #pragma no_side_effect(<routine name>), and it needs to be placed after the declaration of the function. The new code looks like:

        int lzd(int);
        #pragma no_side_effect(lzd)

        int a;
        int c = 0;

        int main()
        {
          for (a = 0; a < 1000; a++)
          {
            c = lzd(c);
          }
          return 0;
        }

    Now the loop looks much neater:

        /* 0x0014  10 */  add     %i1,1,%i1
                          ! 11 !  {
                          ! 12 !    c=lzd(c);
        /* 0x0018  12 */  lzd     %o0,%o0
        /* 0x001c  10 */  cmp     %i1,999
        /* 0x0020     */  ble,pt  %icc,.L77000018
        /* 0x0024     */  nop


  • Problem getting GOBI 2000 HS to work

    - by Zypher
    I've been trying to get my integrated GOBI WWAN card to work under 10.10 for a while now. I was able to get the network manager to see the card after installing the gobi-loader package, and I was able to set up the connection, but I cannot establish a connection to Verizon. Below is the output from /var/log/daemon.log as I try to connect:

        Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Quiting due to inactivity
        Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Shutdown was requested
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) starting connection 'Verizon connection'
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 3 -> 4 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 6 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 6 -> 4 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> CDMA connection failed: (32) No service
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 9 (reason 0)
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Marking connection 'Verizon connection' invalid.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> Activation (ttyUSB0) failed.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 9 -> 3 (reason 0)
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): deactivating device (reason: 0).
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.


  • Suggestions for Summer Intern Application Assignments

    - by orangepips
    As part of our application process we want prospective college interns to complete an assignment on their own -- either programming or analytical -- to give us something tangible to evaluate, such as code or a flowchart. I have two ideas for these assignments, one programming and one analytical, and I am interested in gathering feedback about them.

    Programming assignment: Generate a month's calendar for a given date. The first row should indicate the days of the week (e.g. Sunday - Saturday). Each subsequent row should contain a week's days. The date supplied should be highlighted (e.g. bolded). We will probably prescribe the output format even more strictly -- probably down to what the HTML source should look like, including CSS classes. The thinking is that this forces answerers to actually do some work if they merely copy a solution from the internet. (A short sketch follows this entry.)

    Analytical assignment: Diagram or describe in prose a system for managing a set of traffic lights for traffic at a four-way intersection. Each direction (i.e. North, South, East and West) has two lanes (i.e. right and left). The left lane is turn-only and has a green arrow light to indicate right of way. The system is able to detect whether lanes have cars in them and change the lights accordingly. I would expect a flow chart or some prose describing a finite state machine that deals with each contingency. This would hopefully provide some indication of the applicant's ability to reason through a logic problem of sorts and articulate an approach for solving it.

    Areas seeking feedback:

        - Is it unreasonable to ask this of applicants? If not, is it better to request it before or after a phone screen?
        - Are these questions too hard or too easy for a collegiate audience?
        - Any suggestions for alternate questions?
        - Do these seem like good tools for analyzing people who would be part of a software development life cycle?
        - Programming language suggestions -- I'm thinking Java, Python and/or C# (we're actually a ColdFusion shop).
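
    A minimal sketch of the programming assignment (plain-text output with brackets as the highlight is an assumption here; the post suggests prescribing HTML output with specific CSS classes instead):

        // Hedged sketch: print the month containing 'date', with a weekday
        // header row and the supplied day marked with brackets.
        using System;

        class CalendarSketch
        {
            static void Main()
            {
                PrintMonth(DateTime.Today);
            }

            static void PrintMonth(DateTime date)
            {
                Console.WriteLine("Sun Mon Tue Wed Thu Fri Sat");

                var first = new DateTime(date.Year, date.Month, 1);
                int leading = (int)first.DayOfWeek;   // blank cells before day 1
                int days = DateTime.DaysInMonth(date.Year, date.Month);

                Console.Write(new string(' ', leading * 4));
                for (int day = 1; day <= days; day++)
                {
                    // Highlight the supplied date; every cell is 4 characters wide.
                    Console.Write(day == date.Day ? $"[{day,2}]" : $" {day,2} ");
                    if ((leading + day) % 7 == 0 || day == days)
                        Console.WriteLine();   // wrap after each Saturday
                }
            }
        }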


  • Microsoft and Application Architectures

    Microsoft has dealt with several kinds of application architectures, including but not limited to desktop applications, web applications, operating systems, relational database systems, Windows services, and web services. Because of the size and market share of Microsoft, virtually every modern language works with or around a Microsoft product. Some of these languages include: Visual Basic, VB.Net, C#, C++, C, ASP.NET, ASP, HTML, CSS, JavaScript, Java and XML.

    From my experience, Microsoft strives to maintain an n-tier application standard, where an application is composed of multiple layers that perform specific functions. For example, the presentation layer, business layer and data access layer are three general layers that just about every formally structured application contains. The presentation layer contains anything to do with displaying information to the screen and how it appears on the screen. The business layer is the middle man between the presentation layer and the data access layer, and transforms data from the data access layer into usable information to be stored later or sent to an output device through the presentation layer. The data access layer does as its name implies: it allows the business layer to access data from a data source like MS SQL Server, XML, or another data source. (A toy code sketch of this split follows this entry.)

    One of my favorite technologies that Microsoft has come out with recently is the .NET Framework. This framework allows developers to code an application in multiple languages, which are all compiled to a common intermediate language (MSIL) executed by the Common Language Runtime (CLR). This allows VB and C# developers to work seamlessly together as if they were working in the same project. The only real disadvantage to using the .NET Framework is that it only natively runs on Microsoft operating systems. However, Microsoft does control a majority of the operating systems currently installed on modern computers and servers, especially personal home computers. Given that the .NET Framework is so flexible, it is ideal for businesses to develop applications around it, as long as they are willing to commit to using Microsoft technologies and operating systems in the future. I have been a professional developer for about 9+ years now and have seen the .NET Framework work flawlessly in just about every instance I have used it. In addition, I have used it to develop web applications, mobile phone applications, desktop applications, web service applications, and Windows service applications, to name a few.
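
    A toy C# sketch of the three-layer split described above; all class and method names here are illustrative only, not drawn from any particular Microsoft guidance:

        using System;

        // Data access layer: the only place that touches the data source.
        public class CustomerRecord { public int Id; public string Name; }

        public interface ICustomerRepository
        {
            CustomerRecord GetById(int id);
        }

        // Business layer: turns raw data into usable information.
        public class CustomerService
        {
            private readonly ICustomerRepository repository;
            public CustomerService(ICustomerRepository repository) { this.repository = repository; }

            public string DescribeCustomer(int id)
            {
                var record = repository.GetById(id);
                return record == null ? "Unknown customer" : $"Customer #{record.Id}: {record.Name}";
            }
        }

        // Presentation layer: only formats and displays.
        public static class CustomerView
        {
            public static void Render(CustomerService service, int id)
                => Console.WriteLine(service.DescribeCustomer(id));
        }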


  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of C# spaghetti, and need to add something or remove something... but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization done at every single output point, SQL that doesn't use any kind of DBAL, database connections left open everywhere...

    Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here are some samples from the same 500-line function:

        protected void DoSave(bool cIsPostBack)
        {
            //ALWAYS a cPostBack
            cIsPostBack = true;
            SetPostBack("1");

            string inCreate = "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
            parseValues = new string[] { "","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","" };

            if (!cIsPostBack)
            {
                //.......
                //....
                //....
                if (!cIsPostBack)
                {
                }
                else
                {
                }
                //....
                //....
                strHPhone = StringFormat(s1.Trim());

                s1 = parseValues[18].Replace(encStr, " ");
                strWPhone = StringFormat(s1.Trim());

                s1 = parseValues[11].Replace(encStr, " ");
                strWExt = StringFormat(s1.Trim());

                s1 = parseValues[21].Replace(encStr, " ");
                strMPhone = StringFormat(s1.Trim());

                s1 = parseValues[19].Replace(encStr, " ");
                //(hundreds of lines of this)
                //....
                //....

                SQL = "...... lots of SQL .... ";

                SqlCommand curCommand;
                curCommand = new SqlCommand();
                curCommand.Connection = conn1;
                curCommand.CommandText = SQL;
                try
                {
                    curCommand.ExecuteNonQuery();
                }
                catch {}
                //....
            }
        }

    I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledge base on how to do this sort of thing: finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,
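
    A hedged illustration of one common first mechanical step for code like the sample above (illustrative only; the Log helper is hypothetical): keep the behaviour identical while making the resource handling and error reporting honest, so later, larger refactors have a safe base.

        // Same behaviour as the original fragment, but the command is reliably
        // disposed and exceptions are no longer silently swallowed.
        // Requires System.Data.SqlClient.
        private void ExecuteSave(string sql, SqlConnection conn1)
        {
            using (var curCommand = new SqlCommand(sql, conn1))
            {
                try
                {
                    curCommand.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    Log(ex);   // hypothetical logging helper
                    throw;     // at minimum, stop hiding failures
                }
            }
        }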


  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";

        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();

        String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));

        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));

        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));

        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.


  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with sudo apt-get upgrade on my Ubuntu 10.04 box, and I have Ubuntu 12.04 (3.2.0-29-generic-pae) now. I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on. I can see this message on the active monitor:

        Not optimum mode. Recommended mode: 1680x1050 60Hz

    I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says:

        You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server.

    I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current - Enabled but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with the Nvidia proprietary drivers on 10.04 - I had to put paths to EDID files in /etc/X11/xorg.conf explicitly - but the resolution was as recommended and both monitors were working.

    If I understand correctly, the nouveau drivers are used by default now, because the resolution is still quite high, definitely not 800x600. xrandr showed:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200       66.0*
           1280x1024       76.0
           1024x768        76.0
           800x600         73.0
           640x480         73.0
           640x400          0.0
           320x400          0.0
           1680x1050_60.00 (0x4f)  146.2MHz
                h: width  1680 start 1784 end 1960 total 2240 skew    0 clock   65.3KHz
                v: height 1050 start 1053 end 1059 total 1089            clock   60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible when it's placed inside the Firefox window, and only one monitor is working. I like open source, and if it's possible I'd prefer to use the nouveau drivers, but a few things would need to be fixed. I'm curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04 - is that true? How can I get it back to work?


  • SSAO implementation

    - by Irbis
    I am trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When saving the gbuffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculations, I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);

            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;

            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link. I wonder if it is possible to use world coordinates in the above code? When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas in place of occlusion, so the ground should have white areas only near to the object.


  • After installing the geos library (C++ and C), trying to install the rgeos package (R) reports geos-config missing!

    - by user1873888
    Knowing that the rgeos package, from the R language, requires a prior installation of the geos libraries, I installed both libgeos and libgeos-c1 (3.2.2), using the Synaptic installer, on my Ubuntu 12.04 (32-bit) machine. Then I tried to install rgeos directly from the R console, and it issued a message to the effect that geos-config was not found. The output is as follows:

        > install.packages("rgeos")
        Installing package(s) into ‘/home/checo/R/i486-pc-linux-gnu-library/2.15’
        (as ‘lib’ is unspecified)
        also installing the dependency ‘sp’

        probando la URL 'http://cran.rstudio.com/src/contrib/sp_1.0-9.tar.gz'
        Content type 'application/x-gzip' length 882102 bytes (861 Kb)
        URL abierta
        ==================================================
        downloaded 861 Kb

        probando la URL 'http://cran.rstudio.com/src/contrib/rgeos_0.2-19.tar.gz'
        Content type 'application/x-gzip' length 221471 bytes (216 Kb)
        URL abierta
        ==================================================
        downloaded 216 Kb

        * installing *source* package ‘sp’ ...
        ** package ‘sp’ successfully unpacked and MD5 sums checked
        ** libs
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c Rcentroid.c -o Rcentroid.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c gcdist.c -o gcdist.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c init.c -o init.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip.c -o pip.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip2.c -o pip2.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c sp_xports.c -o sp_xports.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c surfaceArea.c -o surfaceArea.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c zerodist.c -o zerodist.o
        gcc -std=gnu99 -shared -o sp.so Rcentroid.o gcdist.o init.o pip.o pip2.o sp_xports.o surfaceArea.o zerodist.o -L/usr/lib/R/lib -lR
        installing to /home/checo/R/i486-pc-linux-gnu-library/2.15/sp/libs
        ** R
        ** data
        ** demo
        ** inst
        ** preparing package for lazy loading
        ** help
        *** installing help indices
        ** building package indices
        ** installing vignettes
           ‘intro_sp.Rnw’
           ‘over.Rnw’
        ** testing if installed package can be loaded
        * DONE (sp)

        * installing *source* package ‘rgeos’ ...
        ** package ‘rgeos’ successfully unpacked and MD5 sums checked
        configure: CC: gcc -std=gnu99
        configure: CXX: g++
        configure: rgeos: 0.2-17
        checking for /usr/bin/svnversion... no
        configure: svn revision: 394
        checking geos-config usability... ./configure: line 1385: geos-config: command not found
        no
        configure: error: geos-config not usable
        ERROR: configuration failed for package ‘rgeos’
        * removing ‘/home/checo/R/i486-pc-linux-gnu-library/2.15/rgeos’
        Warning in install.packages :
          installation of package ‘rgeos’ had non-zero exit status

    Forgive my ignorance, but I don't know where this file, "geos-config", comes from: should it be generated by the gcc compilations above, or should it have been installed when the libgeos libraries were installed? I learnt, from another machine, that "geos-config" is an executable and that it should be installed in /usr/bin. Do you have any idea of what's wrong with my procedure? Thanks, -Sergio.


  • My father wants to learn PHP-MySQL to port his application. What should I do to help?

    - by adijiwa
    My father is a doctor/physician. About 15 years ago he started writing an application to handle his patients' medical records in his clinic at home. The app has the ability to input patients' medical records (obviously), search patients by some criteria, manage medicine stocks, output receipts to a printer, and some more CRUD operations. He wrote it in dBase III+. A few years later he migrated to FoxPro 2.6 for DOS, and finally, a few years after that, he rewrote his app in Visual FoxPro 9. And now (actually two years ago) he wants to rewrite it in PHP, but he doesn't know how. The Visual FoxPro version of this app is still running and has no serious problems, except that it sometimes performs slowly. Usually there are 1-5 concurrent users. The binary and database files are shared via a Windows share. He did all the coding as a hobby and for free (it is for his own clinic, after all). He also uses this app in two other offices he manages.

    Some reasons why he wants to rewrite it in PHP-MySQL:

        - He wants to learn.
        - It is easier to deploy (?).
        - Client setup is easier: clients need only a browser.

    What should I do to help my father? How should he start? I explored some options:

        1. I let my father learn PHP and MySQL (and HTML (and JavaScript?)) from scratch.
        2. I create/bundle a framework. I'm thinking of bundling CodeIgniter and a web UI framework (any suggestions?), especially to reduce the effort of writing presentation code.

    What do you think?

    tl;dr: My father (a doctor) wants to rewrite his Visual FoxPro app in PHP-MySQL. He knows very little of PHP and MySQL, but he wants to learn. What should I do to help? How should he start?

    Some facts:

        - My father is 50 years old.
        - His first encounter with a PC was in the early 1980s. It was an IBM PC with an Intel 8088.
        - He knows BASIC. He taught me how to use DOS and how to program with BASIC.
        - The other language he knows fairly well is dBase/FoxPro.
        - I got my bachelor's CS degree last year. I know the internals of my father's app, because sometimes he wants me to help him write his app.

    Sorry for my English.


  • Moved to SSD and now getting "the disk drive for / is not ready yet"

    - by dmt0
    I moved my Ubuntu 12.04 install over to an SSD drive. I copied all directories except the ones most often written to: var, tmp, ... I reinstalled grub onto the SSD by booting with a live CD and following the commands in this post: How to move Ubuntu to an SSD.

    This seemed to work fine, because when I press "e" in the grub menu, I see the expected UUIDs. But right after grub I get:

        could not log bootup: Address already in use
        the disk drive for / is not ready yet or not present

    If I skip, I get the same for /tmp, /run, and other dirs. If I go into manual recovery and do:

        mount -n -o remount,rw /

    it turns out that everything can mount, no problem. Can't get my head around this one. My fstab seems right. grub is right. AHCI in the BIOS is enabled. Why is this happening? What can I do to fix it? When I do drop into a shell from this error and get to mount things manually, how do I get the OS to continue loading? Thank you guys for any ideas you can give me. Here's what my fstab looks like right now:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc defaults 0 0
        UUID=67fc8a7a-f1db-485c-88bd-e007c214244f / ext4 defaults,noatime,discard 0 1
        # swap was on /dev/sda3 during installation
        UUID=6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6 none swap sw 0 0
        UUID=7397729b-2125-4b1d-b5eb-28866898d773 /hdd ext4 errors=remount-ro 0 1
        /hdd/home /home none bind 0 0
        /hdd/run /run none bind 0 0
        /hdd/tmp /tmp none bind 0 0
        /hdd/var /var none bind 0 0
        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

    Output from blkid:

        /dev/sda1: LABEL="System Reserved" UUID="EABC56C1BC568849" TYPE="ntfs"
        /dev/sda2: UUID="7CCC6124CC60D9C2" TYPE="ntfs"
        /dev/sda3: UUID="6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6" TYPE="swap"
        /dev/sda5: UUID="7397729b-2125-4b1d-b5eb-28866898d773" TYPE="ext4"
        /dev/sdb1: UUID="67fc8a7a-f1db-485c-88bd-e007c214244f" TYPE="ext4"

    The relevant part from fdisk -l:

        Device Boot      Start       End   Blocks  Id System
        /dev/sdb1         2048 115345407 57671680  83 Linux

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi, I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors:

    cannot open display: :0.0
    Maximum number of clients reached

    Searching around, there are lots of examples of people facing this problem, and sometimes they identify which of the programs they run was using up all the client slots. See e.g. LP 70872 (Firefox) and LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces. However, my question is not "which of my programs is causing this problem" but rather: how can one go about diagnosing it? In the above (and other) bugs, forum reports, etc., a number of tools are suggested:

    xlsclients - lists the client applications for the given display, but I don't think that corresponds to 'X clients'
    xrestop - a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output
    xwininfo -root -children - lists X window objects

    From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients that have long since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach over the apps I run the only option open to me?

    Update Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this: nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. I disabled the wallpaper and have not had a crash since. I'm not marking this as answered yet, since my question was not the specific problem itself but how to diagnose it. Unfortunately this was mostly trial and error, which sucks.
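    None of the listed tools answers the question on its own, but sampling their output over time turns divide-and-conquer into measurement: start logging, then watch which number climbs and note what was launched when it did. A minimal sketch using only the tools already mentioned (the counts are rough, since wc -l also counts header lines, but the trend is what matters):

    while true; do
        printf '%s clients=%s windows=%s\n' \
            "$(date '+%F %T')" \
            "$(xlsclients | wc -l)" \
            "$(xwininfo -root -children | wc -l)"
        sleep 60
    done >> ~/x-client-counts.log

    Cross-referencing a jump in either count with xrestop's per-client rows then narrows the culprit without killing applications one by one.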

  • Oracle HRMS API – Update Employee Address

    - by PRajkumar
    API - hr_person_address_api.update_person_address

    Example: the employee's Address Line1 is currently "50 Main Street"; let's update it to "60 Main Street" using the update address API.

    DECLARE
        ln_object_version_number  PER_ADDRESSES.OBJECT_VERSION_NUMBER%TYPE := 1;
    BEGIN
        -- Update Employee Address
        hr_person_address_api.update_person_address
        (   -- Input data elements
            p_effective_date          => TO_DATE('10-JUN-2011','DD-MON-YYYY'),
            p_address_id              => 16406,
            p_address_line1           => '60 Main Street',
            -- Output data elements
            p_object_version_number   => ln_object_version_number
        );
        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;
            dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;
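    One caveat the example glosses over: p_object_version_number is an in/out locking token, and the HRMS APIs reject a value that does not match the row, so hard-coding 1 only works on an address that has never been updated. A sketch that reads the current value first, reusing address_id 16406 from the example above:

    DECLARE
        ln_object_version_number  PER_ADDRESSES.OBJECT_VERSION_NUMBER%TYPE;
    BEGIN
        -- Fetch the row's current object version number before calling the API
        SELECT object_version_number
          INTO ln_object_version_number
          FROM per_addresses
         WHERE address_id = 16406;

        hr_person_address_api.update_person_address
        (   p_effective_date          => TO_DATE('10-JUN-2011','DD-MON-YYYY'),
            p_address_id              => 16406,
            p_address_line1           => '60 Main Street',
            p_object_version_number   => ln_object_version_number
        );
        COMMIT;
    END;
    /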

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed that one project's replication action had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check whether any others had failed. Nothing fancy: just a loop through all projects that looks at each project's replication child, compares the values of the last_sync and last_try properties, and prints the result if they're not equal. (There are probably more sensible ways of doing this, but at least it gave me the chance to put on my headphones and do a little bit of coding.)

    script
    // this script will locate failed project-level replication.
    // it will look at the sync times for 'last_sync' and 'last_try'
    // and compare these; if they deviate you should investigate.
    // NOTE! this code is offered 'as is'. Run at your own risk;
    // it will probably work as intended, but in no way can I
    // (or Oracle) be held responsible if your server starts behaving
    // like a three year old kid in a candy store.. (not that mine do,
    // they are very well behaved boys...)
    run('configuration');
    run('storage');
    printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
    run('cd /');
    run('shares');
    proj = list();
    printf("total projects: %d\n", proj.length);
    // just for project level replication
    for (i = 0; i < proj.length; i++) {
        run('select ' + proj[i]);
        run('replication');
        // get all replication actions
        preps = list();
        for (j = 0; j < preps.length; j++) {
            run('select ' + preps[j]);
            last_sync = get('last_sync');
            last_try = get('last_try');
            // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
            if (!(last_sync.valueOf() === last_try.valueOf())) {
                printf("sync has failed for %s %s\n", proj[i], get('target'));
            } else {
                // printf("OK %s %s\n", proj[i], get('target'));
            }
            run('done'); // done with the replication action
        }
        run('done');
        run('done');
    }
    printf("finished\n");

    For more on how to run the script, or testing it, please look at my previous post. Sample output:

    Host: elb1sn01, pool: exalogic
    total projects: 45
    sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
    finished
