Search Results

Search found 16894 results on 676 pages for 'block device'.


  • Delphi Exception handling problem with multiple Exception handling blocks

    - by Robert Oschler
    I'm using Delphi Pro 6 on Windows XP with FastMM 4.92 and the JEDI JVCL 3.0. Given the code below, I'm having the following problem: only the first exception handling block gets a valid instance of E. The other blocks match properly with the class of the Exception being raised, but E is unassigned (nil). For example, given the current order of the exception handling blocks, when I raise an E1 the block for E1 matches and E is a valid object instance. However, if I raise an E2, that block does match, but E is unassigned (nil). If I move the E2 catching block to the top of the ordering and raise an E1, then when the E1 block matches, E is now unassigned. With this new ordering, if I raise an E2, E is properly assigned, when it wasn't while the E2 block was not the first block in the ordering. Note I tried this case with a bare-bones project consisting of just a single Delphi form. Am I doing something really silly here, or is something really wrong? Thanks, Robert

        type
          E1 = class(EAbort)
          end;
          E2 = class(EAbort)
          end;

        procedure TForm1.Button1Click(Sender: TObject);
        begin
          try
            raise E1.Create('hello');
          except
            On E: E1 do
            begin
              OutputDebugString('E1');
            end;
            On E: E2 do
            begin
              OutputDebugString('E2');
            end;
            On E: Exception do
            begin
              OutputDebugString('E(all)');
            end;
          end; // try()
        end;

    Read the article

  • How do DP and CC change in Piet?

    - by Paul Butcher
    According to the specification, black colour blocks and the edges of the program restrict program flow. If the Piet interpreter attempts to move into a black block or off an edge, it is stopped and the CC is toggled. The interpreter then attempts to move from its current block again. If it fails a second time, the DP is moved clockwise one step. These attempts are repeated, with the CC and DP being changed between alternate attempts. If after eight attempts the interpreter cannot leave its current colour block, there is no way out and the program terminates. Unless I'm reading it incorrectly, this is at odds with the behaviour of the Fibonacci sequence example here: http://www.dangermouse.net/esoteric/piet/fibbig1.gif (from: http://www.dangermouse.net/esoteric/piet/samples.html). Specifically, why does the DP turn left at (0,3) ((0,0) being (top, left)) when it hits the left edge? At this point, both DP and CC are LEFT, so, by my reading, the sequence should then be:

      1. Attempt (and fail) to leave the block by going off the edge at (0,4).
      2. Toggle CC to RIGHT.
      3. Attempt (and fail) to leave the block by going off the edge at (0,2).
      4. Rotate DP to UP.
      5. Attempt (and succeed) to leave the block at (1,2) by entering the white block at (1,1).

    The behaviour indicated by the trace seems to be that DP gets rotated all the way around, leaving CC at LEFT. What have I misunderstood?
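
    Not from the original post, but for reference, a minimal sketch in Java of the attempt rule exactly as the specification quoted above reads. The blocked predicate is hypothetical and stands in for "moving out of the current block with this (DP, CC) hits a black block or the program edge":

        import java.util.function.BiPredicate;

        class PietStep {
            // DP values 0..3 = RIGHT, DOWN, LEFT, UP (clockwise order);
            // CC values -1 = LEFT, +1 = RIGHT.
            static boolean tryToLeaveBlock(int dp, int cc,
                                           BiPredicate<Integer, Integer> blocked) {
                for (int attempt = 0; attempt < 8; attempt++) {
                    if (!blocked.test(dp, cc)) {
                        return true;           // exit found; interpreter moves on
                    }
                    if (attempt % 2 == 0) {
                        cc = -cc;              // first failure of the pair: toggle CC
                    } else {
                        dp = (dp + 1) % 4;     // second failure: rotate DP clockwise
                    }
                }
                return false;                  // eight failures: program terminates
            }
        }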

    Read the article

  • What is weird about wrapping setjmp and longjmp?

    - by Max
    Hello. I am using setjmp and longjmp for the first time, and I ran across an issue that comes about when I wrap setjmp and longjmp. I boiled the code down to the following example:

        #include <stdio.h>
        #include <setjmp.h>

        jmp_buf jb;

        int mywrap_save() {
            int i = setjmp(jb);
            return i;
        }

        int mywrap_call() {
            longjmp(jb, 1);
            printf("this shouldn't appear\n");
        }

        void example_wrap() {
            if (mywrap_save() == 0) {
                printf("wrap: try block\n");
                mywrap_call();
            } else {
                printf("wrap: catch block\n");
            }
        }

        void example_non_wrap() {
            if (setjmp(jb) == 0) {
                printf("non_wrap: try block\n");
                longjmp(jb, 1);
            } else {
                printf("non_wrap: catch block\n");
            }
        }

        int main() {
            example_wrap();
            example_non_wrap();
        }

    Initially I thought example_wrap() and example_non_wrap() would behave the same. However, this is the result of running the program (GCC 4.4, Linux):

        wrap: try block
        non_wrap: try block
        non_wrap: catch block

    If I trace the program in gdb, I see that even though mywrap_save() returns 1, the else branch after returning is oddly ignored. Can anyone explain what is going on?

    Read the article

  • Running Hadoop example in pseudo-distributed mode on a VM

    - by manas
    I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I have made the prerequisite configs. I ran this example in standalone mode successfully. But in pseudo-distributed mode I get the following error:

        $ ./bin/hadoop fs -put conf input
        10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
        10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003
        10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003
        10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003
        10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
        10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null
        10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
        put: Protocol not available
        10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
        java.net.SocketException: Protocol not available
            at sun.nio.ch.Net.getIntOption0(Native Method)
            at sun.nio.ch.Net.getIntOption(Net.java:178)
            at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
            at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
            at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
            at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
            at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
            at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

    Any leads will be highly appreciated.

    Read the article

  • Java split XML file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of files: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing it by the number of files. Then my solution was to use a BufferedReader and to use it like:

        while ((n = reader.read(buffer, 0, buffer.length)) != -1) {

    The main problem is that for the XML file I cannot just split it anywhere; I have to split it on the boundary of a block delimited by a start tag and an end tag:

        <start tag>
        bla bla xml stuff
        </end tag>

    So I cannot cut a block in the middle. If, when I'm in the middle of a block, the size of my new file is greater than my max, I will have to read until the end of the tag, and then start the next file. The problem is that I have all sorts of cases, and it is a bit difficult to search for the end tag:

      - the block reads text until the middle of the end tag
      - the block reads text until the end of the end tag, and no more characters after
      - etc.

    And at the same time I have a loop reading the next block. Sometimes the end of one block concatenated with the start of the next one contains the end tag. I hope you get the idea. My question is, does anyone have an algorithm that does this more accurately and treats all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
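
    Not from the original question, but a rough sketch in Java of one way to handle the boundary cases, under simplifying assumptions: the file is read line by line rather than into a raw buffer, each logical block ends with a known end tag (the tag name is supplied by the caller and hypothetical here), and the XML declaration and enclosing root element are ignored for brevity. Reading whole lines sidesteps the "end tag split across two buffers" case, because a line never ends in the middle of a tag:

        import java.io.*;

        public class XmlSplitter {
            // Sketch only: rotate to a new part file once the size budget is
            // exceeded, but only right after a block-closing end tag.
            public static void split(File in, int parts, String endTag) throws IOException {
                long budget = in.length() / parts + 1;
                try (BufferedReader reader = new BufferedReader(new FileReader(in))) {
                    int part = 0;
                    long written = 0;
                    BufferedWriter writer = newPart(in, part++);
                    String line;
                    while ((line = reader.readLine()) != null) {
                        writer.write(line);
                        writer.newLine();
                        written += line.length() + 1;
                        // Only rotate files once the current block is closed.
                        if (written >= budget && line.contains(endTag)) {
                            writer.close();
                            writer = newPart(in, part++);
                            written = 0;
                        }
                    }
                    writer.close();
                }
            }

            private static BufferedWriter newPart(File in, int n) throws IOException {
                return new BufferedWriter(new FileWriter(in.getPath() + ".part" + n));
            }
        }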

    Read the article

  • iPhone 4.0 on iPhone but still Ad-Hoc compile for 3.1.3?

    - by Mark
    Device: Version: 3.1, Build: 3511
    Device: iPhone
    OS: iPhone OS 4.0
    Xcode 3.2.2 (old)
    Xcode 3.2.3 (new; for the iPhone OS 4.0 beta)

    Background: As you can see, I installed 4.0 on my iPhone, as I read on this forum that it's really hard to near impossible to downgrade back to 3.1.3, but it's the only device I have and use for development. When I try to continue to develop and build with the old Xcode, it tells me that "No provisioned iPhone OS device is connected". When I select Simulator it does compile and build; however, when I spread this file it does not work on the devices of my testers: they get a signing error. When I run the new Xcode, it does compile and build on the device, and when I spread this file it does work on the devices of my testers (which are running the current official version 3.1.3). Questions: Why is there a difference between building for Simulator and Device? A simulator build never seems to work on the devices of my testers because of signing issues, while the build for device does work. Currently it seems the old Xcode has become useless; however, I read that you may not use the beta Xcode to build your application for release. So, knowing the above, how am I able to pull this off with my current setup, given that the old Xcode won't let me build properly?

    Read the article

  • Modelling problem - Networked devices with commands

    - by Schneider
    I encountered a head-scratching modelling problem today: We are modelling a physical control system composed of Devices and NetworkDevices. An example of a Device is a TV. An example of a NetworkDevice is an IR transceiver with an Ethernet connection. As you can see, to be able to control the TV over the internet we must connect the Device to the NetworkDevice. There is a many-to-one relationship between Device and NetworkDevice, i.e. a TV only has one NetworkDevice (the IR transceiver), but the IR transceiver may control many Devices (e.g. many TVs). So far no problem. The complicated bit is that every Device has a collection of Commands. The type of the Command (e.g. IrCommand, SerialCommand; N.B. not currently modelled) depends on the type of NetworkDevice that the Device is connected to. In the current legacy system the Device has a collection of generic Commands (no typing) where fields are "interpreted" depending on the NetworkDevice type. How do I go about modelling this in OOP such that:

      1. You can only ever add a Command of the appropriate type, given the NetworkDevice the Device is attached to?
      2. If I change the NetworkDevice, the Commands collection changes to the appropriate type.
      3. The API is simple/elegant/intuitive to use.
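
    Not part of the question, but one possible shape for requirement 1 in Java, with all class names hypothetical: make the command type a parameter of the NetworkDevice, and type each Device by the NetworkDevice it is attached to, so the compiler rejects commands of the wrong kind:

        import java.util.ArrayList;
        import java.util.List;

        abstract class Command {}
        class IrCommand extends Command {}
        class SerialCommand extends Command {}

        // The network device fixes which command type is valid.
        abstract class NetworkDevice<C extends Command> {}
        class IrTransceiver extends NetworkDevice<IrCommand> {}

        // A device is typed by the network device it is attached to, so it
        // can only ever hold commands of the matching type.
        class Device<C extends Command> {
            private final NetworkDevice<C> network;
            private final List<C> commands = new ArrayList<C>();

            Device(NetworkDevice<C> network) { this.network = network; }

            void addCommand(C command) { commands.add(command); }
        }

        class Demo {
            public static void main(String[] args) {
                Device<IrCommand> tv = new Device<IrCommand>(new IrTransceiver());
                tv.addCommand(new IrCommand());
                // tv.addCommand(new SerialCommand());  // does not compile
            }
        }

    Requirement 2 is the awkward one: with static generics, swapping the NetworkDevice means rebuilding the Device with a new type parameter, so a runtime alternative is to carry a Class token for the command type and validate inside addCommand.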

    Read the article

  • Reuse client Java Socket in a Java server

    - by user1394983
    I'm developing a Java server to control an Android online game. Is it possible to save the client socket from myserversocket.accept() in a variable in a Client class? This would be very useful, because that way the server can communicate with a client whenever the server wants, and not only when the client contacts the server. My actual code is:

        import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.net.ServerSocket; import java.net.Socket; import java.util.ArrayList; import java.util.UUID; import sal.app.shared.Packet; public class Server { private ArrayList<GameSession> games = new ArrayList<GameSession>(); private ArrayList<Client> pendent_clients = new ArrayList<Client>(); private Packet read_packet= new Packet(); private Packet sent_packet = new Packet(); private Socket clientSocket = null; public static void main(String[] args) throws ClassNotFoundException{ ServerSocket serverSocket = null; //DataInputStream dataInputStream = null; //DataOutputStream dataOutputStream = null; ObjectOutputStream oos=null; ObjectInputStream ois=null; Server myServer = new Server(); try { serverSocket = new ServerSocket(7777); System.out.println("Listening :7777"); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } while(true){ try { myServer.clientSocket = new Socket(); myServer.clientSocket = serverSocket.accept(); myServer.read_packet = new Packet(); myServer.sent_packet = new Packet(); oos = new ObjectOutputStream(myServer.clientSocket.getOutputStream()); ois = new ObjectInputStream(myServer.clientSocket.getInputStream()); //dataInputStream = new DataInputStream(clientSocket.getInputStream()); //dataOutputStream = new DataOutputStream(clientSocket.getOutputStream()); //System.out.println("ip: " + clientSocket.getInetAddress()); //System.out.println("message: " + ois.read()); //dataOutputStream.writeUTF("Hello!"); /*while ((myServer.read_packet = (Packet) ois.readObject()) != null) { myServer.handlePacket(myServer.read_packet); break; }*/ myServer.read_packet=(Packet) ois.readObject(); myServer.handlePacket(myServer.read_packet); //oos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } finally{ if( myServer.clientSocket!= null){ /*try { //myServer.clientSocket.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ } /*if( ois!= null){ try { ois.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if( oos!= null){ try { oos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } }*/ } } } public void handlePacket(Packet hp) throws IOException { if(hp.getOpCode() == 1) { registPlayer(hp); } } public void registPlayer(Packet p) throws IOException { Client registClient = new Client(this.clientSocket); this.pendent_clients.add(registClient); if(pendent_clients.size() == 2) { initAGame(); } else { ObjectOutputStream out=null; Packet to_send = new Packet(); to_send.setOpCode(4); out = new ObjectOutputStream(registClient.getClientSocket().getOutputStream()); out.writeObject(to_send); } } public void initAGame() throws IOException { Client c1 = pendent_clients.get(0); Client c2 = pendent_clients.get(1); Packet to_send = new Packet(); ObjectOutputStream out=null; GameSession incomingGame = new GameSession(c1,c2); games.add(incomingGame); to_send.setGameId(incomingGame.getGameId()); to_send.setOpCode(5); out = new ObjectOutputStream(c1.getClientSocket().getOutputStream()); out.writeObject(to_send); out = new ObjectOutputStream(c2.getClientSocket().getOutputStream()); out.writeObject(to_send); pendent_clients.clear(); } public Client getClientById(UUID given_id) { for(GameSession gs: games) { if(gs.getClient1().getClientId().equals(given_id)) { return gs.getClient1(); } else if(gs.getClient2().getClientId().equals(given_id)) { return gs.getClient2(); } } return null; } }

    With this code I got these errors:

        java.net.SocketException: Broken pipe
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1847)
            at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1756)
            at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1257)
            at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1211)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1395)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
            at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1547)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:333)
            at Server.initAGame(Server.java:146)
            at Server.registPlayer(Server.java:120)
            at Server.handlePacket(Server.java:106)
            at Server.main(Server.java:63)

    This error occurs when the second client connects and the server tries to send a Packet to the previous client 1 in the function initAGame(), in this code:

        out = new ObjectOutputStream(c1.getClientSocket().getOutputStream()); out.writeObject(to_send);

    My Android code is this:

        package sal.app; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.net.Socket; import java.net.UnknownHostException; import sal.app.logic.DataBaseManager; import sal.app.shared.Packet; import android.app.Activity; import android.os.Bundle; import android.view.Window; import android.view.WindowManager; public class MultiPlayerWaitActivity extends Activity{ private DataBaseManager db; public void onCreate(Bundle savedInstanceState) { super.requestWindowFeature(Window.FEATURE_NO_TITLE); super.getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,WindowManager.LayoutParams.FLAG_FULLSCREEN); super.onCreate(savedInstanceState); setContentView(R.layout.multiwaitlayout); db=DataBaseManager.getSalDatabase(this); db.teste(); try { db.createDataBase(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } Socket socket = null; ObjectOutputStream outputStream = null; ObjectInputStream inputStream = null; //System.out.println("dadadad"); try { socket = new Socket("192.168.1.4", 7777); //Game = new MultiPlayerGame(new ServerManager("192.168.1.66"),new Session(), new Player("")); outputStream = new ObjectOutputStream(socket.getOutputStream()); inputStream = new ObjectInputStream(socket.getInputStream()); //dataOutputStream.writeUTF(textOut.getText().toString()); //textIn.setText(dataInputStream.readUTF()); Packet p = new Packet(); Packet r = new Packet(); p.setOpCode(1); outputStream.writeObject(p); /*try { r=(Packet)inputStream.readObject(); } catch (ClassNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ //while(true){ //dataInputStream = new DataInputStream(clientSocket.getInputStream()); //dataOutputStream = new
DataOutputStream(clientSocket.getOutputStream()); //System.out.println("ip: " + clientSocket.getInetAddress()); //System.out.println("message: " + ois.read()); //dataOutputStream.writeUTF("Hello!"); /*while ((r= (Packet) inputStream.readObject()) != null) { handPacket(r); break; }*/ r=(Packet) inputStream.readObject(); handPacket(r); //oos.close(); //} /*System.out.println(r.getOpCode()); if(r.getOpCode() == 5) { this.finish(); }*/ } catch (UnknownHostException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } /*finally{ if (socket != null){ try { socket.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if (outputStream != null){ try { outputStream.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if (inputStream != null){ try { inputStream.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }*/ //catch (ClassNotFoundException e) { // TODO Auto-generated catch block //e.printStackTrace(); //} catch (ClassNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } } public void handPacket(Packet hp) { if(hp.getOpCode() == 5) { this.finish(); } this.finish(); } } Regards
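
    Not from the post, but a note for context: a Broken pipe on writeObject() generally means the remote side has already closed its end of the connection, and opening a brand-new ObjectOutputStream on the same socket for every message adds its own problems, since each one writes a fresh stream header that the peer's single ObjectInputStream only expects once. A common shape, sketched below with hypothetical names, is to create one stream pair per client when it is accepted, store it on the client object, and reuse it for every packet:

        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.net.Socket;

        // Sketch: one stream pair per client, created once and reused.
        class ConnectedClient {
            private final Socket socket;
            private final ObjectOutputStream out;  // reused for every packet
            private final ObjectInputStream in;

            ConnectedClient(Socket socket) throws IOException {
                this.socket = socket;
                this.out = new ObjectOutputStream(socket.getOutputStream());
                this.in = new ObjectInputStream(socket.getInputStream());
            }

            void send(Object packet) throws IOException {
                out.writeObject(packet);
                out.flush();
            }

            Object receive() throws IOException, ClassNotFoundException {
                return in.readObject();
            }
        }

    The server would then call send(to_send) on the stored client in initAGame(), instead of wrapping the same socket's stream again, and the Android client would have to stay connected and keep reading rather than finishing after the first packet.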

    Read the article

  • JMF microphone volume controller

    - by TacB0sS
    How do I obtain the microphone volume controller in JMF? This is what I have: I tried this implementation concept of yours, but I keep getting a null from the first volume processor when I try to get the stream. Here is how I do it:

        // the device is the media device, specifically audio
        Processor processorForVolume = Manager.createProcessor(device.getLocator());

        // wait until configured
        ProcessorStates newState = new ProcessorStateListener(Processor.Configured).waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);

        // setting the content descriptor to null - read in another thread; this allows to get the gain control
        processorForVolume.setContentDescriptor(null);

        // set the track control format to one supported by the device and the track control.
        // I didn't match it to an RTP allowed format, but I don't think this has anything to do with it...
        TrackControl[] trackControls = processorForVolume.getTrackControls();
        if (trackControls.length == 0)
            throw new MC_Exception("No track controls where found for this device:", new Object[]{device});
        for (TrackControl control : trackControls)
            trackManipulator.manipulateTrackControls(control);

        // wait until the processor is realized
        newState = new ProcessorStateListener(Controller.Realized).waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);

        // receives the gain control
        micVolumeController = processorForVolume.getGainControl();

        // cannot get the output stream to process further... any suggestions?
        processor = Manager.createProcessor(processorForVolume.getDataOutput());
        new ProcessorStateListener(Processor.Configured).waitForProcessorState(processor);
        processor.setContentDescriptor(DeviceCapturingManager.RAW_RTP);
        new ProcessorStateListener(Controller.Realized).waitForProcessorState(processor);

    This is the output it generates:

        volumeProcessorState: Configured
        format set to track control - com.sun.media.ProcessEngine$ProcTControl@1627c16: LINEAR, 48000.0 Hz, 16-bit, Stereo, LittleEndian, Signed
        volumeProcessorState: Realized

    and the data output from the processor is null. I should make clear that when the content descriptor != null I do get an output stream but not the volume controller, and when it is null I get the controller but no stream. I try to connect to an audio microphone device. Adam

    Read the article

  • How to create custom pages (Drupal 6.x)

    - by jc70
    In the template.php file I inserted the code below. I found a tutorial online that gives the code, but I'm confused about how to get it to work. I copied the code below and inserted it into the template.php from the theme HTML5_base. I duplicated the page.tpl.php file and created custom pages, page-gallery.tpl.php and page-articles.tpl.php, and inserted some text into the files just to see that I've navigated to the pages with the changes. It looks like Drupal is not recognizing page-gallery.tpl.php and page-articles.tpl.php. In the template.php there are the following functions: html5_base_preprocess_page(), html5_base_preprocess_node(), html5_base_preprocess_block(). In the tutorial it uses these functions: phptemplate_preprocess_page(), phptemplate_preprocess_block(), phptemplate_preprocess_node().

        function phptemplate_preprocess_page(&$vars) {
          //code block from the Drupal handbook
          //the path module is required and must be activated
          if (module_exists('path')) {
            //gets the "clean" URL of the current page
            $alias = drupal_get_path_alias($_GET['q']);
            $suggestions = array();
            $template_filename = 'page';
            foreach (explode('/', $alias) as $path_part) {
              $template_filename = $template_filename.'-'.$path_part;
              $suggestions[] = $template_filename;
            }
            $vars['template_files'] = $suggestions;
          }
        }

        function phptemplate_preprocess_node(&$vars) {
          //default template suggestions for all nodes
          $vars['template_files'] = array();
          $vars['template_files'][] = 'node';
          //individual node being displayed
          if ($vars['page']) {
            $vars['template_files'][] = 'node-page';
            $vars['template_files'][] = 'node-'.$vars['node']->type.'-page';
            $vars['template_files'][] = 'node-'.$vars['node']->nid.'-page';
          }
          //multiple nodes being displayed on one page in either teaser
          //or full view
          else {
            //template suggestions for nodes in general
            $vars['template_files'][] = 'node-'.$vars['node']->type;
            $vars['template_files'][] = 'node-'.$vars['node']->nid;
            //template suggestions for nodes in teaser view
            //more granular control
            if ($vars['teaser']) {
              $vars['template_files'][] = 'node-'.$vars['node']->type.'-teaser';
              $vars['template_files'][] = 'node-'.$vars['node']->nid.'-teaser';
            }
          }
        }

        function phptemplate_preprocess_block(&$vars) {
          //the "cleaned-up" block title to be used for suggestion file name
          $subject = str_replace(" ", "-", strtolower($vars['block']->subject));
          $vars['template_files'] = array('block', 'block-'.$vars['block']->delta, 'block-'.$subject);
        }

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For grub configuration I just use (hda0,0) as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root=? Currently I boot from the pendrive once, check the debug messages to see what /dev/sdX device has been assigned to the USB drive by the kernel, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to higher priority than internal hard drives. There are various initrd-generating scripts which include support for UUID in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others. The boot process goes like this (it is quite similar in Windows):

      1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage-1).
      2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
      3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
      4. The kernel initializes all built-in devices (rootwait does its magic now).
      5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
      6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Part 5 is the one giving me problems.

    Read the article

  • WinUSB failing on non-development computers

    - by Giawa
    Good afternoon. WinUSB is working well on the development computer that I am using (Win XP SP3). I am able to download new firmware to the Cypress FX2, and then connect to the new USB device once it 'renumerates'. However, I've tried the same code with the WinUSB driver on a few other computers (Win XP SP3, Win7 x64) and they both returned the error "A device attached to the system is not functioning." when trying to use CreateFile to get a handle to the USB device. The devicePath was found successfully, so I'm not sure why it cannot connect to the device. Furthermore, the device manager states that my device is working properly. I'm curious if I'm missing something when compiling the code? I would guess that my development computer has something installed on it that the other computers do not? Or perhaps it's a power setting and the device is going to sleep (although I've fooled around with the Power Options on each computer to no avail). Does anyone have any ideas? I've compiled under Visual Studio 2008, and have installed the Microsoft C++ 2008 Redistributable Package on the computers that I've tested on. Thanks, Giawa

    Read the article

  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have transparent parts. The cubes that are behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides whether the transparency works or not, which is something I want to avoid. Look here:

        cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"];
        Block[] cubes = new Block[4];
        cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3));
        cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3));
        cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4));
        cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4));
        foreach(Block b in cubes)
        {
            b.shape.RenderShape(GraphicsDevice, cubeEffect);
        }

    This is the code in the Draw method. It produces this result: As you can see, the textures behind the leaf cube are not visible on the other side. When I reverse indexes 3 and 0 in the array, I get this: It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.

    Read the article

  • java.lang.NoClassDefFoundError thrown with my own packages in Android 1.5

    - by TiGer
    Hi, I have developed an application which has several packages within its project... A class in one of those packages is called right away in the first line of code, which throws the dreaded java.lang.NoClassDefFoundError... I don't get it; the package simply is within the project, and it works fine on my Android 1.6 device, but won't work with my 1.5 device... I do have to say that the project was originally set for 1.6, but then I changed the minSdkVersion within the manifest from 4 to 3... Is that bad practice? Or maybe it has nothing to do with the platform version? Also, I do get these lines as well from the DDMS:

        05-04 17:24:59.921: WARN/dalvikvm(2041): VFY: unable to resolve static field 2 (MANUFACTURER) in Landroid/os/Build;
        05-04 17:24:59.921: WARN/dalvikvm(2041): VFY: rejecting opcode 0x62 at 0x0034
        05-04 17:24:59.921: WARN/dalvikvm(2041): VFY: rejected Lmobilaria/android/managementModule/Management;.getDeviceSpecifics ()V
        05-04 17:24:59.921: WARN/dalvikvm(2041): Verifier rejected class Lmobilaria/android/managementModule/Management;

    That's the managementModule, which also tries to retrieve several info fields of the device itself... Again, this works just fine on the 1.6 device, even though that's a development device whilst my 1.5 device is a non-development device...
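
    A note for context, not from the post: the DDMS warnings point at android.os.Build.MANUFACTURER, a field that only exists from API level 4 (Android 1.6) onwards; on 1.5 (API level 3) the verifier rejects the whole class that references it, which then surfaces as NoClassDefFoundError. A minimal sketch of one common workaround is to read such fields reflectively so the class still verifies on the older platform:

        // Sketch: read Build.MANUFACTURER reflectively so the enclosing class
        // still passes the Android 1.5 (API level 3) verifier.
        static String manufacturerOrUnknown() {
            try {
                java.lang.reflect.Field field = android.os.Build.class.getField("MANUFACTURER");
                return (String) field.get(null);
            } catch (NoSuchFieldException e) {
                return "unknown";  // platform older than API level 4
            } catch (IllegalAccessException e) {
                return "unknown";
            }
        }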

    Read the article

  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
    Alright. First and foremost, a warning: this is a bigger-than-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what I've tried. I've included several images of our setup and the problem it is having.

    TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1. This is the guide Dell has sent me to set up two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it (that server being whatever server created the partition on the target). Both ESX servers are members of the Host Group.

    Full Version: I have 2 ESX3.5 servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3) connected to an iSCSI SAN device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNs.

    Part One: MD3000i Storage. On the MD3000i, both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (VMware doesn't like anything past 2TB), and you can even see that the ESX servers are targeting the MD3000 just fine.

    Part Two: The ESX Servers. Figure 1. So as you can see above, both ESX servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When I "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages, with

        Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4).

    being the only line that grabs my attention. (EPI3 is the server name.)

        Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101
        Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102
        Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101
        Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102
        Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0
        Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735
        Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05
        Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0
        Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0
        Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30
        Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0
        Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0
        Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish
        Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB)
        Mar 11 10:41:17 epi3 kernel: sdb: sdb1
        Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish
        Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1
        Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735
        Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05
        Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0
        Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0
        Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30
        Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1
        Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1
        Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish
        Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB)
        Mar 11 10:41:18 epi3 kernel: sdc: sdc1
        Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish
        Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4).
        Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0

    Things I've tried:

      - Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabling iSCSI, re-adding targets, rescanning. Same result.
      - Removing the partition on 102 and formatting the partition on 103 instead. Same result, except flipped: 103 can use the storage, 102 can not.
      - Starting over: removing all iSCSI targets on both ESX boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then, on the MD3000, removing the Host Group, removing the Host-to-Virtual Mappings, and restarting the SAN. Followed the documentation again; same result. Both servers see the storage, but only one server can use it.
      - Disabling and re-enabling VMware DRS and HA. Same result.
      - Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same result.

    I'm kinda losing my mind here. Everything I read online says "just partition it and if the ESX boxes can see the targets, it just works"... well, crap. Any ideas, any other things to try?
    Can anyone at least point me in the right direction? I'm really tired of working from 1am til 4am (our maintenance hours).

    Read the article

  • How to stop registration attempts on Asterisk

    - by Travesty3
    The main question: My Asterisk logs are littered with messages like these:

        [2012-05-29 15:53:49] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 15:53:50] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53
        [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53
        [2012-05-29 15:54:02] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 15:54:03] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found
        [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"55435217"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found
        [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"1731687005"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found
        [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=dEBcOzUysX
        [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=9zUari4Mve
        [2012-05-30 01:19:00] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=sOYgI1ItQn
        [2012-05-30 01:19:02] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=2EGLTzZSEi
        [2012-05-30 01:19:04] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=j0JfZoPcur
        [2012-05-30 01:19:06] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=Ra0DFDKggt
        [2012-05-30 01:19:08] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=rR7q7aTHEz
        [2012-05-30 01:19:10] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=VHUMtOpIvU
        [2012-05-30 01:19:12] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=JxZUzBnPMW

    I use Asterisk for an automated phone system. The only thing it does is receive incoming calls and execute a Perl script. No outgoing calls, no incoming calls to an actual phone, no phones registered with Asterisk. It seems like there should be an easy way to block all unauthorized registration attempts, but I have struggled with this for a long time. It seems like there should be a more effective way to prevent these attempts from even getting far enough to reach my Asterisk logs; some setting I could turn on/off that doesn't allow registration attempts at all, or something. Is there any way to do this? Also, am I correct in assuming that the "Registration from ..." messages are likely people attempting to get access to my Asterisk server (probably to make calls on my account)?
    And what's the difference between those messages and the "Sending fake auth rejection ..." messages?

    Further detail: I know that the "Registration from ..." lines are intruders attempting to get access to my Asterisk server. With Fail2Ban set up, these IPs are banned after 5 attempts (for some reason, one got 6 attempts, but w/e). But I have no idea what the "Sending fake auth rejection ..." messages mean, or how to stop these potential intrusion attempts. As far as I can tell, they have never been successful (I haven't seen any weird charges on my bills or anything). Here's what I have done:

      1. Set up hardware firewall rules as shown below. Here, xx.xx.xx.xx is the IP address of the server, yy.yy.yy.yy is the IP address of our facility, and aa.aa.aa.aa, bb.bb.bb.bb, and cc.cc.cc.cc are the IP addresses that our VoIP provider uses. Theoretically, ports 10000-20000 should only be accessible by those three IPs.

        +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+
        | Order | Source Ip                   | Protocol | Direction | Action | Destination Ip              | Destination Port |
        +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+
        | 1     | cc.cc.cc.cc/255.255.255.255 | udp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000      |
        | 2     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 80               |
        | 3     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 2749             |
        | 4     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 443              |
        | 5     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 53               |
        | 6     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 1981             |
        | 7     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 1991             |
        | 8     | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 2001             |
        | 9     | yy.yy.yy.yy/255.255.255.255 | udp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 137-138          |
        | 10    | yy.yy.yy.yy/255.255.255.255 | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 139              |
        | 11    | yy.yy.yy.yy/255.255.255.255 | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 445              |
        | 14    | aa.aa.aa.aa/255.255.255.255 | udp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000      |
        | 17    | bb.bb.bb.bb/255.255.255.255 | udp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000      |
        | 18    | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 1971             |
        | 19    | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 2739             |
        | 20    | any                         | tcp      | inbound   | permit | xx.xx.xx.xx/255.255.255.255 | 1023-1050        |
        | 21    | any                         | all      | inbound   | deny   | any on server               | 1-65535          |
        +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+

      2. Set up Fail2Ban. This is sort of working, but it's reactive instead of proactive, and doesn't seem to be blocking everything (like the "Sending fake auth rejection ..." messages).

      3. Set up rules in sip.conf to deny all except for my VoIP provider. Here is my sip.conf with almost all commented lines removed (to save space).
    Notice at the bottom my attempt to deny all except for my VoIP provider:

        [general]
        context=default
        allowguest=no
        allowoverlap=no
        bindport=5060
        bindaddr=0.0.0.0
        srvlookup=yes
        disallow=all
        allow=g726
        allow=ulaw
        allow=alaw
        allow=g726aal2
        allow=adpcm
        allow=slin
        allow=lpc10
        allow=speex
        allow=g726
        insecure=invite
        alwaysauthreject=yes
        ;registertimeout=20
        registerattempts=0
        register = user:pass:[email protected]:5060/700

        [mysipprovider]
        type=peer
        username=user
        fromuser=user
        secret=pass
        host=sip.mysipprovider.com
        fromdomain=sip.mysipprovider.com
        nat=no
        ;canreinvite=yes
        qualify=yes
        context=inbound-mysipprovider
        disallow=all
        allow=ulaw
        allow=alaw
        allow=gsm
        insecure=port,invite
        deny=0.0.0.0/0.0.0.0
        permit=aa.aa.aa.aa/255.255.255.255
        permit=bb.bb.bb.bb/255.255.255.255
        permit=cc.cc.cc.cc/255.255.255.255

    Read the article

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVidia graphics cards:

      1. Quadro NVS 295 (PCI Express, dual DisplayPort outputs)
      2. GeForce FX 5200 (PCI, DVI and VGA outputs)

    I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf:

        Section "ServerLayout"
            Identifier "Default Layout"
            Screen "PCI-Express Screen" 0 0
            # adding this makes X fail to start:
            Screen "PCI Screen" 0
            Inputdevice "Generic Keyboard"
            Inputdevice "Configured Mouse"
        EndSection

        Section "Module"
            Load "glx" # not sure why/if this is needed
        EndSection

        Section "Monitor"
            Identifier "DELL 2408WFP"
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "NVIDIA Quadro NVS 295"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 0
            BusID "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier "NVIDIA GeForce FX 5200"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 1
            BusID "PCI:6:4:0"
        EndSection

        Section "Screen"
            Identifier "PCI-Express Screen"
            Device "NVIDIA Quadro NVS 295"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0"
        EndSection

        Section "Screen"
            Identifier "PCI Screen"
            Device "NVIDIA GeForce FX 5200"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+0"
        EndSection

    I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows:

        02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd
        06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200]

    Which is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed... and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU so I can then configure its monitor?

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5, an application that does a lot of scrolling took a very big performance hit. top tells me that X is using a lot of CPU, and that was not happening before. The machine has an ATI Rage XL with 8MB, and X is using the ati driver as there is no proprietary ATI driver for this board on Linux. The xorg.conf:

        Section "Device"
            Identifier "Videocard0"
            Driver "ati"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Videocard0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
                Modes "1024x768" "800x600" "640x480"
            EndSubSection
        EndSection

        Section "DRI"
            Group 0
            Mode 0666
        EndSection

    A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not. This is the Xorg.0.log for the Centos5 machine:

        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        [drm] failed to load kernel module "mach64"
        (II) ATI(0): [drm] drmOpen failed
        (EE) ATI(0): [dri] DRIScreenInit Failed
        (II) ATI(0): Largest offscreen areas (with overlaps):
        (II) ATI(0): 1024 x 1279 rectangle at 0,768
        (II) ATI(0): 768 x 1280 rectangle at 0,768
        (II) ATI(0): Using XFree86 Acceleration Architecture (XAA)
        Screen to screen bit blits
        Solid filled rectangles
        8x8 mono pattern filled rectangles
        Indirect CPU to Screen color expansion
        Solid Lines
        Offscreen Pixmaps
        Setting up tile and stipple cache:
        32 128x128 slots
        10 256x256 slots
        (==) ATI(0): Backing store disabled
        (==) ATI(0): Silken mouse enabled
        (II) ATI(0): Direct rendering disabled
        (==) RandR enabled

    I also tried using EXA instead of XAA and setting:

        Option "AccelMethod" "XAA"
        Option "XAANoOffscreenPixmaps" "true"

        uname -a
        Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux

        rpm -qa | grep xorg-x11-server
        xorg-x11-server-utils-7.1-4.fc6
        xorg-x11-server-sdk-1.1.1-48.52.el5
        xorg-x11-server-Xvfb-1.1.1-48.52.el5
        xorg-x11-server-Xnest-1.1.1-48.52.el5
        xorg-x11-server-Xorg-1.1.1-48.52.el5

    The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".

    Read the article

  • How can I set my bootloader to load my primary (C:) partition?

    - by acidzombie24
    I created 4 partitions and want to use them to have separate Windows XP, Windows 7, (possibly) Windows Vista installations, and "WinDummy" (to test applications in Vista, XP or another OS). I used Norton Ghost to install an OS to the drive in about 3 minutes. My problem is that I installed the spare first on the 4th partition, then Windows 7 on the second. I tried to set the bootloader (with EasyBCD) to use the first partition, but it doesn't want to. Here's my debug screen in EasyBCD. As you can see, the device is set to H: and I can't figure out how to change it. I can make my bootloader use Windows 7 first, but I can't make it use my C: install of XP instead of my spare H:. How would I fix this?

        Windows Boot Manager
        --------------------
        identifier       {9dea862c-5cdd-4e70-acc1-f32b344d4795}
        device           partition=H:
        description      Windows Boot Manager
        locale           en-US
        inherit          {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e}
        default          {bc2d8409-8640-11de-aa7e-a477d86453c4}
        resumeobject     {bc2d8405-8640-11de-aa7e-a477d86453c4}
        displayorder     {bc2d8409-8640-11de-aa7e-a477d86453c4}
                         {bc2d8406-8640-11de-aa7e-a477d86453c4}
                         {bc2d8404-8640-11de-aa7e-a477d86453c4}
                         {466f5a88-0af2-4f76-9038-095b170dc21c}
        toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d}
        timeout          3

        Real-mode Boot Sector
        ---------------------
        identifier       {bc2d8409-8640-11de-aa7e-a477d86453c4}
        device           partition=C:
        path             \NTLDR
        description      Windows XP

        Windows Boot Loader
        -------------------
        identifier       {bc2d8406-8640-11de-aa7e-a477d86453c4}
        device           partition=D:
        path             \Windows\system32\winload.exe
        description      Windows 7
        locale           en-US
        inherit          {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence {bc2d8407-8640-11de-aa7e-a477d86453c4}
        recoveryenabled  Yes
        osdevice         partition=D:
        systemroot       \Windows
        resumeobject     {bc2d8405-8640-11de-aa7e-a477d86453c4}
        nx               OptIn

        Windows Boot Loader
        -------------------
        identifier       {bc2d8404-8640-11de-aa7e-a477d86453c4}
        device           partition=E:
        path             \Windows\system32\winload.exe
        description      Blank
        osdevice         partition=E:
        systemroot       \Windows

        Windows Legacy OS Loader
        ------------------------
        identifier       {466f5a88-0af2-4f76-9038-095b170dc21c}
        device           partition=H:
        path             \ntldr
        description      Windows XP Spare

    Read the article

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: Is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks is failing to be recognized when trying to mount it, or rather the single partition on it.

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there to be a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

        Device Boot Start End Blocks Id System
        /dev/sdd1 1 243201 1953512001 83 Linux

    Of course mount is right, the device /dev/sdd1 is not there; I'm guessing udev did not create it because of the mdadm data still on it:

        sudo mdadm --examine /dev/sdd

        /dev/sdd:
        Magic : a92b4efc
        Version : 1.2
        Feature Map : 0x0
        Array UUID : b164e513:c0584be1:3cc53326:48691084
        Name : pringle:0 (local to host pringle)
        Creation Time : Sat Jun 16 21:37:14 2012
        Raid Level : raid6
        Raid Devices : 6
        Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
        Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
        Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
        Data Offset : 2048 sectors
        Super Offset : 8 sectors
        State : clean
        Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
        Update Time : Sun Aug 12 22:20:39 2012
        Checksum : 4c329db0 - correct
        Events : 1238535
        Layout : left-symmetric
        Chunk Size : 512K
        Device Role : Active device 3
        Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
        --- Logical volume ---
        LV Name                /dev/moab/backup
        VG Name                moab
        LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                500.00 GiB
        Current LE             128000
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     2048
        Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
        type:    LUKS1
        cipher:  aes-cbc-essiv:sha256
        keysize: 256 bits
        device:  /dev/mapper/moab-backup
        offset:  3072 sectors
        size:    1048572928 sectors
        mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks

    Read the article

  • Controller Error: Do I need to worry?

    - by Kryten
    Hi, I have an HP Pavillion dv5224ea laptop with Windows 7 on it. Recently I discovered an error in Event Viewer:

        The driver detected a controller error on \Device\Ide\IdePort1.

    More details:

        - System
          - Provider
              [ Name]  atapi
          - EventID 11
              [ Qualifiers]  49156
            Level 2
            Task 0
            Keywords 0x80000000000000
          - TimeCreated
              [ SystemTime]  2010-03-07T12:43:07.090197600Z
            EventRecordID 30198
            Channel System
            Computer Alistair-Win7
            Security
        - EventData
            \Device\Ide\IdePort1
            0000100001000000000000000B0004C002000000850100C00000000000000000000000000000000000000000000000000000000004100000

        Binary data:

        In Words
        0000: 00100000 00000001 00000000 C004000B
        0008: 00000002 C0000185 00000000 00000000
        0010: 00000000 00000000 00000000 00000000
        0018: 00000000 00001004

        In Bytes
        0000: 00 00 10 00 01 00 00 00   ........
        0008: 00 00 00 00 0B 00 04 C0   .......À
        0010: 02 00 00 00 85 01 00 C0   ......À
        0018: 00 00 00 00 00 00 00 00   ........
        0020: 00 00 00 00 00 00 00 00   ........
        0028: 00 00 00 00 00 00 00 00   ........
        0030: 00 00 00 00 04 10 00 00   ........

    Event Viewer is recording A LOT of these errors (sometimes 13, one after the other!). Do I need to worry? What does this error mean? What device could \Device\Ide\IdePort1 be? What is an ATAPI error? Do I need to re-install Windows? I generally find this occurs when I try to back up my machine (using Windows Backup) or when using a program that relies on Volume Shadow Copy. I have run "sfc": no problems. There are no device errors in Device Manager. I have also run "vssadmin list writers": no problems. What's going on? Would it be a good idea to re-install Windows 7?
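
    One check I can run myself, in case it narrows things down: the Windows build of smartmontools can query the drive's own SMART health (assuming the disk shows up as the first physical drive; smartctl --scan confirms the name):

        smartctl --scan              # list the device names the Windows build sees
        smartctl -H -a /dev/sda      # overall health plus attributes such as UDMA_CRC_Error_Count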

    Read the article

  • Second IP address on same interface, CentOS 6.3

    - by user16081
    I tried to add a second LAN address in CentOS 6.3 on a brand new install and it's not working. I installed a new copy of CentOS 5.7, tried the same thing, and it worked right away. Now I'm just trying to set up the alias on the same subnet and it's still not working. What am I doing wrong? Is this not possible on CentOS 6.3?

    On CentOS 5.7 a second IP address on the same interface (even on a different subnet) works:

        DEVICE=eth0
        BOOTPROTO=static
        BROADCAST=192.168.0.255
        HWADDR=00:0C:29:01:6F:89
        IPADDR=192.168.0.167
        NETMASK=255.255.255.0
        NETWORK=192.168.0.0
        ONBOOT=yes

        DEVICE=eth0:0
        BOOTPROTO=static
        BROADCAST=192.168.0.255
        HWADDR=00:0C:29:01:6F:89
        IPADDR=192.168.0.166
        NETMASK=255.255.255.0
        NETWORK=192.168.0.0
        ONBOOT=yes

    On CentOS 6.3 it does not work:

        DEVICE=eth0
        BOOTPROTO=static
        BROADCAST=192.168.0.255
        HWADDR=00:0C:29:1E:DE:86
        IPADDR=192.168.0.242
        NETMASK=255.255.255.0
        NETWORK=192.168.0.0
        GATEWAY=192.168.0.1
        ONBOOT=yes
        DNS1=205.134.232.138
        DNS2=4.4.4.4

        DEVICE=eth0:0
        BOOTPROTO=static
        BROADCAST=192.168.0.255
        HWADDR=00:0C:29:1E:DE:86
        IPADDR=192.168.0.240
        NETMASK=255.255.255.0
        NETWORK=192.168.0.0
        ONBOOT=yes

        # /etc/init.d/network restart
        Shutting down interface eth0:  Device state: 3 (disconnected)      [  OK  ]
        Shutting down loopback interface:                                  [  OK  ]
        Bringing up loopback interface:                                    [  OK  ]
        Bringing up interface eth0:  Active connection state: activated
        Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3
                                                                           [  OK  ]

        # ping 192.168.0.240
        PING 192.168.0.240 (192.168.0.240) 56(84) bytes of data.
        From 192.168.0.242 icmp_seq=2 Destination Host Unreachable

    Appreciate any advice, thanks.

    Update: Perhaps this is relevant? On CentOS 5.7:

        # dmesg | grep eth
        eth0: registered as PCnet/PCI II 79C970A
        eth0: link up
        eth0: link up

    On 6.3:

        # dmesg | grep eth
        e1000 0000:02:00.0: eth0: (PCI:66MHz:32-bit) 00:0c:29:1e:de:86
        e1000 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection
        e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        8021q: adding VLAN 0 to HW filter on device eth0
        eth0: no IPv6 routers present
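
    One thing I'm considering, though I haven't confirmed it: CentOS 6 puts interfaces under NetworkManager by default, which historically ignored eth0:0 alias files, and alias files are not supposed to carry HWADDR or GATEWAY lines. A trimmed ifcfg-eth0:0 along these lines might behave better (NM_CONTROLLED=no and ONPARENT are the parts to test on this install):

        DEVICE=eth0:0
        BOOTPROTO=none
        IPADDR=192.168.0.240
        NETMASK=255.255.255.0
        ONPARENT=yes
        NM_CONTROLLED=no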

    Read the article

  • How do you monitor SSD wear in Windows when the drives are presented as 'generic' devices?

    - by MikeyB
    Under Linux, we can monitor SSD wear fairly easily with smartmontools, whether the drive is presented as a normal block device or a generic device (which happens when the drive has been hardware RAIDed by certain controllers, such as the one on the IBM HS22). How can we do the equivalent under Windows? Does anyone actually use smartmontools? Or are there other packages out there? The problem is that SCSI generic devices just don't show up in Windows. If the drives aren't RAIDed we can see them fine.

    How I'd do it in Linux:

        sles11-live:~ # lsscsi -g
        [1:0:0:0]    disk    SMART     USB-IBM           8989  /dev/sda   /dev/sg0
        [2:0:0:0]    disk    ATA       MTFDDAK256MAR-1K  MA44  -          /dev/sg1
        [2:0:1:0]    disk    ATA       MTFDDAK256MAR-1K  MA44  -          /dev/sg2
        [2:1:8:0]    disk    LSILOGIC  Logical Volume    3000  /dev/sdb   /dev/sg3

        sles11-live:~ # smartctl -l ssd /dev/sg1
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size        Value  Description
          7  =====  =           =      == Solid State Device Statistics (rev 1) ==
          7  0x008  1              26~ Percentage Used Endurance Indicator
                                   |_ ~ normalized value

        sles11-live:~ # smartctl -l ssd /dev/sg2
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size        Value  Description
          7  =====  =           =      == Solid State Device Statistics (rev 1) ==
          7  0x008  1               3~ Percentage Used Endurance Indicator
                                   |_ ~ normalized value
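
    What I'm considering trying, untested: smartmontools ships a Windows build in which physical drives are addressed as /dev/pdN, and drives hidden behind some RAID controllers are reachable through pass-through options such as -d megaraid,N. Whether this particular controller's pass-through is supported is exactly the open question, so treat the lines below as a sketch:

        smartctl --scan                            # list devices the Windows build can see
        smartctl -l ssd /dev/pd0                   # pd0 = first physical drive
        smartctl -l ssd -d megaraid,0 /dev/pd0     # only if the controller's pass-through works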

    Read the article

  • Debian network bridge configuration - /etc/network/interfaces

    - by Mathias
    I'm running a Lenny Xen dom0 hosting multiple virtual machines in a routed IP setup. To get an additional private subnet, I created the bridge xenbr0 in the dom0 with the following commands:

        brctl addbr xenbr0
        ifconfig xenbr0 10.0.0.1 netmask 255.255.255.0
        ifconfig xenbr0 up

    This works as expected, and domU interfaces are added to the bridge by Xen on VM start. My only problem is: how the heck do I specify this configuration in /etc/network/interfaces so that it remains permanent and the bridge is available after a reboot? I tried the following config, as found in a lot of tutorials:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            bridge_stp no

    I get two different errors, depending on whether the bridge already exists or not. If it doesn't exist:

        root@dom0:~# brctl show
        bridge name     bridge id               STP enabled     interfaces
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        SIOCSIFADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        SIOCSIFBRDADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        xenbr0: ERROR while getting interface flags: No such device
        Failed to bring up xenbr0.
        done.

    And if it exists:

        root@dom0:~# brctl show
        bridge name     bridge id               STP enabled     interfaces
        xenbr0          8000.000000000000       no
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        RTNETLINK answers: File exists
        Failed to bring up xenbr0.
        done.

    Could anyone point me in the right direction please? The bridge works fine when created manually; I just need the right config file entries. Most of the tutorials I found add some devices to the bridge in the config; is that maybe why it's not working? I don't have any interfaces I want to add to the bridge on creation, as they get added later on VM start...

    Thanks, Mathias
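
    One lead I've since found, which looks like the intended mechanism: bridge-utils installs an ifupdown hook that creates the bridge, but the hook only fires when the stanza contains a bridge_ports line, and "bridge_ports none" declares a bridge with no slave interfaces. A sketch, assuming the bridge-utils package is installed:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0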

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >