Search Results

Search found 6815 results on 273 pages for 'micro sd card'.

Page 50 of 273

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine, and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg | tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to completely reformat the disk, if possible, because it contains about 280GB of data that I would rather not have to find a temporary home for. Any suggestions?
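
    A minimal diagnostic sketch, assuming the drive still enumerates as /dev/sdb1 as in the dmesg output above. In a valid FAT boot sector, the 16-bit field at offset 0x0B holds the logical sector size, which is exactly the value dosfsck is reporting as zero; dumping the first sector read-only shows whether the boot sector itself is damaged:

        # read-only: dump the partition's boot sector and inspect the BPB fields
        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -4

    If bytes 0x0B-0x0C really are 00 00, the boot sector is corrupt. FAT32 keeps a backup boot sector (commonly at logical sector 6 of the partition), and a tool such as testdisk can restore or rebuild it without touching the data area; copy the current sector somewhere safe first:

        sudo dd if=/dev/sdb1 of=sdb1-boot.bak bs=512 count=1   # backup before any repair
        sudo testdisk /dev/sdb

    The testdisk menu labels vary by version, so treat this as a starting point rather than a guaranteed fix.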

    Read the article

  • Why does installing an NVidia 9600GT graphics card take 1GB of RAM away from Windows?

    - by Nick G
    Hi, I've changed graphics cards in my PC and now Windows 7 (32-bit) is reporting that I have a whole gigabyte less physical RAM in my PC. Why is this? Firstly, the machine has 4GB of physical RAM. The old card was an ATI 2600XT with 256MB and the new card is an NVidia 9600GT with 512MB. With the ATI card, Windows sees 3326MB. With the NVidia card, Windows sees 2558MB. I realise that due to address space restrictions I will not see all 4GB with 32-bit Windows, but why is there such a massive loss of RAM when simply changing cards (bearing in mind BOTH cards have their own RAM and borrow no main memory, like some built-on-chipset ones do)? Would using 64-bit Windows solve this? Thanks, Nick.

    Read the article

  • Question about the concept of installing Linux on a flash card

    - by Johnny
    1. I can't understand why Linux on a flash card needs an "install" at all. Does it simply copy certain files to certain locations on the flash card? I mean: plan it in a response file, then have one program read the plan in the response file and write a certain format to the flash card.
    2. Is the file system bound tightly to the Linux kernel? Is it possible to let each kernel, user, and app have its own root, rather than mounting everything under one single "/" root?

    Read the article

  • How do I boot [embedded] Linux from an SD card?

    - by Brandon Yates
    I am hacking together a quick embedded Linux system on a DM816x EVM board. Previously I have been using TFTP and NFS to load my kernel and root filesystem to the board. I am now trying to switch over to loading everything from an SD card. I have my card partitioned such that U-Boot and my kernel image are in one partition, and my rootFS is in another partition. At power-on, U-Boot starts correctly and successfully launches the kernel. However, the kernel is unable to mount the root file system. It appears that it doesn't recognize any SD (mmc) cards. It gives this error message:

        VFS: Cannot open root device "mmcblk0p2" or unknown-block(2,0)
        Please append a correct "root=" boot option; here are the available partitions:
        1f00     256 mtdblock0 (driver?)
        1f01       8 mtdblock1 (driver?)
        1f02    2560 mtdblock2 (driver?)
        1f03    1272 mtdblock3 (driver?)
        1f04    2432 mtdblock4 (driver?)
        1f05     128 mtdblock5 (driver?)
        1f06    4352 mtdblock6 (driver?)
        1f07  204928 mtdblock7 (driver?)
        1f08   50304 mtdblock8 (driver?)
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

    I feel like I'm missing something fundamental here. Why does it not recognize the root device I am trying to load from? Here is the U-Boot boot script that is running:

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw mem=124M earlyprink vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd
        mmc init
        fatload mmc 1 0x80009000 uImage
        bootm 0x80009000
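
    Two things are worth checking, sketched here as assumptions rather than a confirmed fix. First, the panic lists only mtdblock devices, which suggests the kernel has no working MMC/SD support at all; on TI OMAP-derived parts that typically means building the HS-MMC driver into the kernel (not as a module), e.g. CONFIG_MMC and CONFIG_MMC_OMAP_HS. Second, even with the driver present, SD cards are probed asynchronously, so the kernel must be told to wait for the device with rootwait. A revised bootargs line might look like:

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw rootwait mem=124M vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd

    (The earlyprink token in the original script also looks like a typo for earlyprintk; unknown parameters are silently ignored, so it is harmless but does nothing.)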

    Read the article

  • What do I need to use a smart card for Windows login? (No domain, just a regular single local machine)

    - by Muhammad Nour
    I have a reader from ACS (the ACR83) and a brand-new card from the same place (ACO3-32) as a development kit, and I need to use both of them to log in to my laptop locally. I am not part of a domain; I am using Windows XP SP3. What should I do to enable smart card login? Do I need third-party software to do this without a domain, or must it be a domain environment to be able to do such a thing? This is my first time dealing with smart cards, so I hope someone will help me do this.

    Read the article

  • Radeon 4670 Card Has 2 DVI Outs but Only Works When One Is Used in Win7 x64

    - by ssmith
    I have an ATI XFX Radeon 4670 1GB video card in a Windows 7 Ultimate x64 setup. If I only use one monitor, all is well. If I plug a second DVI cable into the card, both monitors go dark. I can get one or the other to display, but only when it is the only one plugged into the card. I'm really hoping for a dual-monitor setup here, as that was the point of the card. I'm using ATI Radeon HD 4600 Series video drivers dated 11/24/09 (8.681.0.0), which are the latest from their site and are supposed to work with Vista or Win7 x64. When the computer boots with both displays connected, both monitors display the initial BIOS and boot screens. The Win7 splash screen also displays, but by the time it gets to the login screen, one monitor is offline.

    Read the article

  • Cannot mount my USB disk in Ubuntu or Windows [dmesg included]

    - by EthanZ6174
    First, here is my dmesg | tail result right after I plugged in the disk:

        $ dmesg | tail
        [ 2578.697224] scsi 6:0:0:0: Direct-Access HP v100w PMAP PQ: 0 ANSI: 0 CCS
        [ 2578.698322] sd 6:0:0:0: Attached scsi generic sg2 type 0
        [ 2578.916464] sd 6:0:0:0: [sdb] 3921920 512-byte logical blocks: (2.00 GB/1.87 GiB)
        [ 2578.916950] sd 6:0:0:0: [sdb] Write Protect is off
        [ 2578.916956] sd 6:0:0:0: [sdb] Mode Sense: 23 00 00 00
        [ 2578.916961] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922460] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922470] sdb:
        [ 2578.969570] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.969578] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    There is nothing after 'sdb:'. In the meantime, lsusb shows:

        $ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 004: ID 03f0:3207 Hewlett-Packard
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 002: ID 045e:0737 Microsoft Corp.
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    So... can anyone help me? What's wrong with my USB disk? Thanks.
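
    A short follow-up sketch, assuming the stick is still /dev/sdb: dmesg stopping at "sdb:" with no partition names after it usually means the kernel found no partition table on the device. Checking the table and the MBR signature directly is quick and read-only:

        sudo fdisk -l /dev/sdb
        sudo dd if=/dev/sdb bs=512 count=1 2>/dev/null | hexdump -C | tail -3   # a valid MBR ends in 55 aa

    If there is no partition table, some sticks are formatted as one big filesystem with no table at all, in which case mounting the whole device may work: sudo mount /dev/sdb /mnt. If that fails too, the first sector has probably been wiped, and repartitioning (which destroys the data) is the usual way out.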

    Read the article

  • How many megabytes per second can I expect from a gigabit USB 2.0 network card under Linux?

    - by Nakedible
    I'm interested in the actual real-world throughput attainable with an external 1000BaseT USB 2.0 network card under Linux. I have been able to attain 90 megabytes per second on a PCI-E interface, but the USB 2.0 bus has a theoretical limit of 480Mbit/s, and in practice less than 40 megabytes per second. Is the actual throughput attainable with such a card under Linux 40, 30, 20, or even as low as 10 megabytes per second, i.e. no better than a normal 100BaseT network card?
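
    For an empirical answer, iperf between the machine with the USB adapter and another gigabit host gives the real number; a minimal sketch, with a placeholder server address:

        iperf -s                      # on the other machine
        iperf -c 192.168.1.10 -t 30   # on the machine with the USB NIC

    As a rough expectation: USB 2.0's 480Mbit/s is a signaling rate, and after protocol overhead bulk transfers top out somewhere around 30-35 megabytes per second, so a good USB 2.0 gigabit adapter should land well above 100BaseT's ~12 megabytes per second but well below wire-speed gigabit.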

    Read the article

  • HP ProLiant DL160 G6 - which hardware RAID card to get? [closed]

    - by zhuanyi
    I have bought a DL160 G6 server (product number: 490427R-001) and it does not come with a hardware RAID card. I am trying to set up a VMware ESXi server, and as such I would need a hardware RAID card. I am just wondering if there is any card that would fit into the chassis? Would a P200i fit? How about a P400? Also, would there be any non-HP RAID card that will do the trick too? I have 4 SATA 160GB hard drives already plugged in. Thanks a lot!

    Read the article

  • Using a Token-ring network card instead of a router (?)

    - by John
    I have cable, and the modem only has one network port. They said I could buy my own router if I wanted to hook up two computers to it. I have an IBM Turbo 16/4 Token-Ring PC Card 2, which was in the laptop when I bought it, and the laptop also has the typical network plug (not a PC Card). Is there a way I could run the laptop as a server and plug my desktop into the laptop, so they both have internet without my having to buy a router? (I realize routers are as cheap as $30.) Both computers run Windows XP Pro SP3. (I also have a 10/100 EtherJet CardBus card (PC Card).) Thanks.

    Read the article

  • All USB ports on my laptop are dead; any options via Ethernet, SD/MMC, or HDMI?

    - by carbontracking
    My son's laptop has taken a lot of pain in his school over the last few months, and he and his buddies have succeeded in breaking both USB ports. I've opened the box, unsoldered the USB ports, and replaced them with new components, but no joy: the ports seem dead. If I assume that the insertion of LEGO pieces, etc. into the USB ports has rendered them unsalvageable, do I have any other options for restoring USB access to the laptop? The laptop has an Ethernet port, an HDMI port, and an SD/MMC port. I've trawled the web for a magic adapter, i.e., Ethernet-to-USB, HDMI-to-USB, or SD/MMC-to-USB, but to no avail. Lots of options for going the other way, though. Does anyone have any ideas on the feasibility of an Ethernet-to-USB cable? Ethernet doesn't seem to carry +5V or GND, but I could run wires from the motherboard to provide those. Amazing how many functions of a laptop just disappear when you have no USB ports.

    Read the article

  • Is it possible to power off a PCIe video card/slot? (e.g. hot-plug)

    - by CR.
    I'm looking at building a system that supports VT-d so I can pass through a high-powered, loud video card to a Xen/KVM/whatever VM (the host will be Linux-based). However, when I'm not using the VM I want to turn the video card off so its fan does not run. The card will not be used when the VM isn't running. Anyone know if this is possible? The PCI Express hot-plug specification allows cutting power to specific slots, but I have never heard of anyone doing it with a video card, and my searches for information have turned up nothing.
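
    For reference, a sketch of what Linux exposes via sysfs; the slot number and PCI address are placeholders that will differ on any real machine, and whether power is actually cut depends on the platform exposing a hotplug-capable slot to the pciehp or acpiphp driver:

        ls /sys/bus/pci/slots/                              # hotplug-capable slots, if any
        echo 0 > /sys/bus/pci/slots/5/power                 # power a slot off (placeholder slot number)

        # Without hotplug slot support, the device can still be detached from
        # the bus, though this alone does not cut power to the card:
        echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove   # placeholder PCI address
        echo 1 > /sys/bus/pci/rescan                        # re-enumerate it later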

    Read the article

  • I don't understand how the Westpac Payway API and NET work

    - by spirytus
    I've been Googling all day, reading numerous PDFs, and I'm still getting confused by the concepts of sending data to the Payway system from Westpac (a bank in Australia, link text). They offer access via an API but also give access via what they call NET. The way I understand it: in the case of NET, when a client wants to pay on my website, the client gets to a page (hosted by the bank or hosted by me) with a form to enter credit card details. This form is then submitted via a normal POST call to Payway's specific https address. It is processed, and the browser returns to the URL I specified as one of the parameters I sent in a hidden field. In the case of the API, the story is similar: the user receives the form and fills in the data, but then the data is sent to my backend (not Payway's). My backend then calls the Payway API with the data provided and, once an answer is received, returns a confirmation page to the client. Is my understanding right? Please explain, as I have a feeling I am missing something basic here.

    Read the article

  • Using NSWindow or NSPanel as "CardLayout"

    - by Leandro
    Hi guys. I'm not a top dev in Java, and what I'm really not is a top Cocoa dev :P I would like your assistance to produce a layout with Cocoa and IB that works just like the CardLayout in Java. Do you have some idea of how to do it? Thanks for the attention! EDIT: CardLayout: a set of panels ("cards") is designed to compose a "deck of cards". It works like a queue of panels, in which only the first "card" is shown on the interface. I can easily interchange the cards if I want to modify the interface for the user. I hope I could help you to help me. =)

    Read the article

  • Optimizing CSS: Google Page Speed is messing with me

    - by The Disintegrator
    I'm using Google Page Speed and it's telling me my CSS is inefficient:

        Very inefficient rules (good to fix on any page):
        * table.fancy thead td: Tag key with 2 descendant selectors and Class overly qualified with tag
        * table.fancy tfoot td: Tag key with 2 descendant selectors and Class overly qualified with tag

    The CSS rules are:

        table.fancy { border: 1px solid white; padding: 5px; }
        table.fancy td { background: #656165; }
        table.fancy thead td, table.fancy tfoot td { background: #767276; }

    I want the header and footer in a different background color than the body of the table (a data table). On what grounds is this inefficient? How do I make it more efficient? I will not add a class to the thead and tfoot for Google's sake.
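
    For context: Page Speed's selector heuristics assume the engine matches selectors right-to-left, so table.fancy thead td starts from every td in the document and then walks ancestors. What the tool is asking for is a single class per styled cell, which only works if you are willing to add the class the question refuses; a sketch:

        .fancy      { border: 1px solid white; padding: 5px; }
        .fancy-cell { background: #656165; }
        .fancy-edge { background: #767276; }  /* would go on the thead/tfoot cells directly */

    In practice, on a page with a handful of tables the difference is unmeasurable; the warning is a generic heuristic, not evidence of a real bottleneck on this particular page.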

    Read the article

  • Parsing an RFC822-Datetime in .NETMF 4.0

    - by chris12892
    I have an application written in .NETMF that requires that I be able to parse an RFC822 datetime. Normally this would be easy, but .NETMF does not have a DateTime.Parse() method, nor does it have any sort of pattern-matching implementation, so I'm pretty much stuck. Any ideas? EDIT: "Intelligent" solutions are probably needed. Part of the reason this is difficult is that the datetime in question has a tendency to have extra spaces in it (but only sometimes). A simple substring solution might work one day but fail the next, when the datetime has an extra space somewhere between the parts. I do not have control over the datetime; it comes from the NOAA.
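
    In the absence of DateTime.Parse, a hand-rolled tokenizer is the usual .NETMF workaround. A sketch under two assumptions: String.Split and int.Parse are available in the .NETMF 4.0 build in use, and splitting on spaces while skipping empty tokens absorbs the stray extra spaces mentioned above. ParseRfc822 and the month table are hypothetical names, and only GMT/UT or +hhmm/-hhmm zone suffixes are handled:

        static readonly string[] Months = { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                                            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };

        // Parses e.g. "Tue, 08 Jun 2010 17:00:00 GMT" or "08 Jun 2010 17:00:00 +0200",
        // tolerating repeated spaces between parts. Returns the time normalized to UTC.
        static DateTime ParseRfc822(string s)
        {
            // Split on spaces and drop the empty tokens that doubled spaces produce.
            string[] raw = s.Trim().Split(' ');
            string[] t = new string[raw.Length];
            int n = 0;
            for (int i = 0; i < raw.Length; i++)
                if (raw[i].Length > 0) t[n++] = raw[i];

            int p = (t[0].IndexOf(',') >= 0) ? 1 : 0;  // skip the optional "Tue," token

            int day = int.Parse(t[p]);
            int month = 0;
            for (int i = 0; i < 12; i++)
                if (Months[i] == t[p + 1]) { month = i + 1; break; }
            int year = int.Parse(t[p + 2]);

            string[] hms = t[p + 3].Split(':');
            DateTime dt = new DateTime(year, month, day,
                int.Parse(hms[0]), int.Parse(hms[1]),
                hms.Length > 2 ? int.Parse(hms[2]) : 0);

            string zone = t[p + 4];
            if (zone != "GMT" && zone != "UT" && zone.Length == 5)
            {
                // "+hhmm" / "-hhmm": subtract the offset to get UTC.
                int sign = (zone[0] == '-') ? -1 : 1;
                int offset = int.Parse(zone.Substring(1, 2)) * 60 + int.Parse(zone.Substring(3, 2));
                dt = dt.AddMinutes(-sign * offset);
            }
            return dt;
        }

    There is no validation here; malformed input will throw, which may be acceptable given the source is a NOAA feed with a known layout.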

    Read the article

  • Cost of exception handlers in Python

    - by Thilo
    In another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if-statement?
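
    A rough way to measure it directly with timeit; numbers vary by interpreter and machine, but the usual CPython result is that entering a try block that never raises costs close to nothing, while actually raising and catching is far more expensive than the comparison:

        import timeit

        if_stmt = "if x != 0:\n    y = 1.0 / x"
        try_stmt = "try:\n    y = 1.0 / x\nexcept ZeroDivisionError:\n    y = 0.0"

        for label, stmt in (("if", if_stmt), ("try", try_stmt)):
            # x = 1, so the exception never fires; use x = 0 to time the raise path
            print(label, timeit.timeit(stmt, setup="x = 1", number=1000000))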

    Read the article

  • Dynamics of the using keyword

    - by AngryHacker
    Consider the following code:

        // module level declaration
        Socket _client;

        void ProcessSocket() {
            _client = GetSocketFromSomewhere();
            using (_client) {
                DoStuff(); // receive and send data
                Close();
            }
        }

        void Close() {
            _client.Close();
            _client = null;
        }

    Given that the code calls the Close() method, which closes the _client socket and sets it to null, while still inside the 'using' block, what exactly happens behind the scenes? Does the socket really get closed? Are there side effects?

    P.S. This is using C# 3.0 on the .NET Micro Framework, but I suppose C#, the language, should behave identically. I am asking because occasionally, very rarely, I run out of sockets (which are a very precious resource on .NET MF devices).
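
    What happens follows from how the compiler expands using: the expression is evaluated once into a hidden temporary, and Dispose is called on that temporary in a finally block. Roughly (a sketch, not the literal compiler output):

        IDisposable temp = _client;   // the reference is captured once, before the body runs
        try
        {
            DoStuff();
            Close();                  // closes the socket and nulls the _client field...
        }
        finally
        {
            if (temp != null)
                temp.Dispose();       // ...but Dispose still runs on the captured reference
        }

    So the socket does get closed, and nulling the field does not defeat the using block; disposing an already-closed socket is normally a no-op. If sockets are still running out, the leak is more likely a path where ProcessSocket assigns a new socket to _client while a previous one was never closed.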

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
          <div class="second">
            <span class="third">Hello World</span>
          </div>
        </div>

    Which would be the fastest way to change the color of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there was a tag in the middle that had a unique id of "foo"? Which one of the CSS methods above would be the fastest then? I know why these methods are used, etc.; I'm just trying to grasp better the rendering technique of the browsers, and I have no idea how to write a test that times it.

    UPDATE: Nice answer, Gumbo. From the looks of it, it would be quicker on a regular site to use the full definition of a tag, since it finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd have a pretty large CSS file, though.
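
    For what it's worth, the major engines match selectors right-to-left: the rightmost simple selector picks the candidate elements, and each extra ancestor part adds checking work per candidate rather than narrowing the search up front. A sketch of the implication:

        /* near-optimal already: one class lookup keyed on the target element */
        .third { color: red; }

        /* a unique id only helps if it sits on the target itself */
        #foo { color: red; }

        /* right-to-left: this still starts from every .third in the document,
           then walks each one's ancestors looking for #foo */
        #foo .third { color: red; }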

    Read the article

  • Function-local static const variable initialization semantics

    - by Hassan Syed
    The questions are in bold, for those who cannot be bothered to read a question in depth. This is a follow-up to this question. It has to do with the initialization semantics of static variables in functions. Static variables are initialized once, and their internal state may be altered later, as I (currently) do in the linked question. However, the code in question does not require the feature of changing the state of the variable later.

    Let me clarify my position, since I don't require the string object's internal state to change. The code is for a trait class for metaprogramming, and as such would benefit from a const char * const pointer; thus ideally a function-local static const variable is needed. My educated guess is that in this case the string in question will be optimally placed in memory by the link-loader, and that the code is more secure and maps to the intended semantics.

    This leads to the semantics of such a variable. "The C++ Programming Language", Third Edition (Stroustrup) does not have anything (that I could find) to say about this matter. All that is said is that the variable is initialized once, when the flow of control of the thread first reaches the code. This leads me to ponder whether the following code would be sensible, and if not, what are the intended semantics?

        #include <iostream>

        const char * const GetString(const char * x_in) {
            static const char * const x = x_in;
            return x;
        }

        int main() {
            const char * const temp = GetString("yahoo");
            std::cout << temp << std::endl;
            const char * const temp2 = GetString("yahoo2");
            std::cout << temp2 << std::endl;
        }

    The following compiles on GCC and prints "yahoo" twice, which is what I want. However, it might not be standards compliant (which is why I post this question). It might be more elegant to have two functions, "SetString" and "String", where the latter forwards to the first. If it is standards compliant, does someone know of a template implementation in boost (or elsewhere)?
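
    On the standards question: that behavior is guaranteed. A block-scope static is initialized the first time control passes through its declaration and skipped on every later pass, so GetString("yahoo2") never touches x. A minimal sketch of the two-function split the question alludes to, with hypothetical names (note that the first-call initialization is not thread-safe in C++03; C++11 later guaranteed it):

        // SetString latches its argument on the first call only; String() reads it back.
        const char* SetString(const char* x_in)
        {
            static const char* const x = x_in;  // initialized exactly once
            return x;
        }

        const char* String()
        {
            return SetString(0);  // only safe once SetString() has run; otherwise latches 0
        }

    The drawback is visible in the comment: calling String() before SetString() permanently latches the null pointer, which is the same first-caller-wins semantics as the original, just split across two names.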

    Read the article

  • How can I make this Java code run faster?

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed):

        /**
         * Simple implementation of a feedforward neural network. The network supports
         * including a bias neuron with a constant output of 1.0 and weighted synapses
         * to hidden and output layers.
         *
         * @author Martin Wiboe
         */
        public class FeedForwardNetwork {
            private final int outputNeurons;     // No of neurons in output layer
            private final int inputNeurons;      // No of neurons in input layer
            private int largestLayerNeurons;     // No of neurons in largest layer
            private final int numberLayers;      // No of layers
            private final int[] neuronCounts;    // Neuron count in each layer, 0 is input layer.
            private final float[][][] fWeights;  // Weights between neurons.
                                                 // fWeight[fromLayer][fromNeuron][toNeuron]
                                                 // is the weight from fromNeuron in fromLayer
                                                 // to toNeuron in layer fromLayer+1.
            private float[][] neuronOutput;      // Temporary storage of output from previous layer

            public float[] compute(float[] input) {
                // Copy input values to input layer output
                for (int i = 0; i < inputNeurons; i++) {
                    neuronOutput[0][i] = input[i];
                }

                // Loop through layers
                for (int layer = 1; layer < numberLayers; layer++) {
                    // Loop over neurons in the layer and determine weighted input sum
                    for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                        // Bias neuron is the last neuron in the previous layer
                        int biasNeuron = neuronCounts[layer - 1];

                        // Get weighted input from bias neuron - output is always 1.0
                        float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron];

                        // Get weighted inputs from rest of neurons in previous layer
                        for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                            activation += neuronOutput[layer - 1][inputNeuron]
                                * fWeights[layer - 1][inputNeuron][neuron];
                        }

                        // Store neuron output for next round of computation
                        neuronOutput[layer][neuron] = sigmoid(activation);
                    }
                }

                // Return output from network = output from last layer
                float[] result = new float[outputNeurons];
                for (int i = 0; i < outputNeurons; i++)
                    result[i] = neuronOutput[numberLayers - 1][i];
                return result;
            }

            private final static float sigmoid(final float input) {
                return (float) (1.0F / (1.0F + Math.exp(-1.0F * input)));
            }
        }

    I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
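
    Two mechanical changes that often help hot loops like this one, sketched against the code above rather than measured: hoist the [layer - 1] lookups out of the loops, and note that with the weights laid out [fromNeuron][toNeuron], the inner loop strides down a column of fWeights. Transposing the weights so the inner loop reads a contiguous row is frequently the bigger win; fWeightsT below is a hypothetical transposed copy laid out [layer][toNeuron][fromNeuron]:

        // Replacement for the per-layer body of compute().
        final float[] prevOut = neuronOutput[layer - 1];
        final float[] out = neuronOutput[layer];
        final int prevCount = neuronCounts[layer - 1];  // index of the bias neuron

        for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
            final float[] w = fWeightsT[layer - 1][neuron];  // contiguous row of weights
            float activation = w[prevCount];                 // bias weight; its input is always 1.0
            for (int in = 0; in < prevCount; in++) {
                activation += prevOut[in] * w[in];
            }
            out[neuron] = sigmoid(activation);
        }

    Math.exp is also a frequent hotspot in sigmoid-heavy code; if exact values are not required, a lookup table or a cheaper approximation is a common trade.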

    Read the article

  • Commercial use of Google API

    - by Mac
    This question may be a better fit for The Business of Software forum, but despite my post there, I'm still unable to determine the following: can I use the Google API to build commercial software? If not, how can the people behind Byline charge for their app? Update: I'm specifically interested in Google's Picasa & Reader APIs.

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written once (or close to it) and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease and user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.

    Read the article
