Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • Static background noise while using new headset Ubuntu 13.04

    - by ThundLayr
    Today I bought a new gaming headset (Gx-Gaming Lychas), and when I tried to record some gameplay-comentary I noticed that there always is a static background noise, I just recorded an example so you guys can listen it (no downloaded needed): http://www47.zippyshare.com/v/65167832/file.html I'm using Kubuntu 13.04 and Kernel version is 3.8.0-19, my laptop is an Acer Travelmate 5760Z, I tried tons of configurations on Alsamixer and none of them made result, I really need to get this working so any kind of help will be very aprecciated. cat /proc/asound/cards: 0 [PCH ]: HDA-Intel - HDA Intel PCH HDA Intel PCH at 0xc6400000 irq 44 cat /proc/asound/card0/codec#0 Codec: Conexant CX20588 Address: 0 AFG Function Id: 0x1 (unsol 1) Vendor Id: 0x14f1506c Subsystem Id: 0x10250574 Revision Id: 0x100003 No Modem Function Group found Default PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Default Amp-In caps: N/A Default Amp-Out caps: N/A State of AFG node 0x01: Power states: D0 D1 D2 D3 D3cold CLKSTOP EPSS Power: setting=D0, actual=D0 GPIO: io=4, o=0, i=0, unsolicited=1, wake=0 IO[0]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[1]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[2]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[3]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 Node 0x10 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Headphone Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Headphone Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x4a 0x4a] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x11 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Speaker Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Speaker Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x80 0x80] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x12 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x13 [Beep Generator Widget] wcaps 0x70000c: Mono Amp-Out Control: name="Beep Playback Volume", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Control: name="Beep Playback Switch", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x07, nsteps=0x07, stepsize=0x0f, mute=0 Amp-Out vals: [0x00] Node 0x14 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Control: name="Capture Volume", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Control: name="Capture Switch", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x50 0x50] [0x80 0x80] [0x80 0x80] [0x80 0x80] Converter: stream=4, channel=0 SDI-Select: 0 PCM: rates [0x160]: 
44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x15 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x16 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x17 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Control: name="Mic Boost Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x04 0x04] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a 0x1b* 0x1d 0x1e Node 0x18 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a* 0x1b 0x1d 0x1e Node 0x19 [Pin Complex] wcaps 0x400581: Stereo Control: name="Headphone Jack", index=0, device=0 Pincap 0x0000001c: OUT HP Detect Pin Default 0x04214040: [Jack] HP Out at Ext Right Conn = 1/8, Color = Green DefAssociation = 0x4, Sequence = 0x0 Pin-ctls: 0xc0: OUT HP Unsolicited: tag=01, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1a [Pin Complex] wcaps 0x400481: Stereo Control: name="Internal Mic Phantom Jack", index=0, device=0 Pincap 0x00001324: IN Detect Vref caps: HIZ 50 80 Pin Default 0x90a70130: [Fixed] Mic at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0x3, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1b [Pin Complex] wcaps 0x400581: Stereo Control: name="Mic Jack", index=0, device=0 Pincap 0x00011334: IN OUT EAPD Detect Vref caps: HIZ 50 80 EAPD 0x0: Pin Default 0x04a19020: [Jack] Mic at Ext Right Conn = 1/8, Color = Pink DefAssociation = 0x2, Sequence = 0x0 Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=02, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1c [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00000014: OUT Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1d [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00010034: IN OUT EAPD Detect EAPD 0x0: Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1e [Pin Complex] wcaps 0x400481: Stereo 
Pincap 0x00000024: IN Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Control: name="Speaker Phantom Jack", index=0, device=0 Pincap 0x00000010: OUT Pin Default 0x92170110: [Fixed] Speaker at Int Front Conn = Analog, Color = Unknown DefAssociation = 0x1, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11* Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x12 Node 0x21 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x22 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x21 Node 0x23 [Pin Complex] wcaps 0x40040b: Stereo Amp-In Amp-In caps: ofs=0x00, nsteps=0x04, stepsize=0x2f, mute=0 Amp-In vals: [0x00 0x00] Pincap 0x00000020: IN Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x24 [Audio Mixer] wcaps 0x20050b: Stereo Amp-In Amp-In caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-In vals: [0x00 0x00] [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11 Node 0x25 [Vendor Defined Widget] wcaps 0xf00000: Mono
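    A quick way to tell whether the hiss tracks the capture gain is to record short clips at a couple of different boost levels from the command line. The commands below are only a sketch and assume alsa-utils is installed; list the exact control names first, since they vary by codec.

        # List the simple mixer controls exposed by card 0 (the HDA Intel PCH above).
        amixer -c 0 scontrols

        # Drop the microphone boost a notch, then record a five-second test clip.
        amixer -c 0 sset 'Mic Boost' 1
        arecord -D plughw:0,0 -f S16_LE -r 44100 -c 2 -d 5 test_boost1.wav

    If the noise stays at the same level even with the boost and capture volumes pulled right down, that points toward electrical interference or the headset itself rather than a mixer setting.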

    Read the article

  • Why doesn't my implementation of El Gamal work for long text strings?

    - by angstrom91
    I'm playing with the El Gamal cryptosystem, and my goal is to be able to encipher and decipher long sequences of text. I have come up with a method that works for short sequences, but does not work for long sequences, and I cannot figure out why. El Gamal requires the plaintext to be an integer. I have turned my string into a byte[] using the .getBytes() method for Strings, and then created a BigInteger out of the byte[]. After encryption/decryption, I turn the BigInteger into a byte[] using the .toByteArray() method for BigIntegers, and then create a new String object from the byte[]. This works perfectly when i call ElGamalEncipher with strings up to 129 characters. With 130 or more characters, the output produced is garbled. Can someone suggest how to solve this issue? Is this an issue with my method of turning the string into a BigInteger? If so, is there a better way to turn my string of text into a BigInteger and back? Below is my encipher/decipher code with a program to demonstrate the problem. import java.math.BigInteger; public class Main { static BigInteger P = new BigInteger("15893293927989454301918026303382412" + "2586402937727056707057089173871237566896685250125642378268385842" + "6917261652781627945428519810052550093673226849059197769795219973" + "9423619267147615314847625134014485225178547696778149706043781174" + "2873134844164791938367765407368476144402513720666965545242487520" + "288928241768306844169"); static BigInteger G = new BigInteger("33234037774370419907086775226926852" + "1714093595439329931523707339920987838600777935381196897157489391" + "8360683761941170467795379762509619438720072694104701372808513985" + "2267495266642743136795903226571831274837537691982486936010899433" + "1742996138863988537349011363534657200181054004755211807985189183" + "22832092343085067869"); static BigInteger R = new BigInteger("72294619754760174015019300613282868" + "7219874058383991405961870844510501809885568825032608592198728334" + "7842806755320938980653857292210955880919036195738252708294945320" + "3969657021169134916999794791553544054426668823852291733234236693" + "4178738081619274342922698767296233937873073756955509269717272907" + "8566607940937442517"); static BigInteger A = new BigInteger("32189274574111378750865973746687106" + "3695160924347574569923113893643975328118502246784387874381928804" + "6865920942258286938666201264395694101012858796521485171319748255" + "4630425677084511454641229993833255506759834486100188932905136959" + "7287419551379203001848457730376230681693887924162381650252270090" + "28296990388507680954"); public static void main(String[] args) { FewChars(); System.out.println(); ManyChars(); } public static void FewChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 129 characters which works just fine . This is a string " + "of 129 characters which works just fine . This is a s", P, G, R); System.out.println("This is a string of 129 characters which works " + "just fine . This is a string of 129 characters which works " + "just fine . 
This is a s"); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } public static void ManyChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 130 characters which doesn’t work! This is a string of " + "130 characters which doesn’t work! This is a string of ", P, G, R); System.out.println("This is a string of 130 characters which doesn’t " + "work! This is a string of 130 characters which doesn’t work!" + " This is a string of "); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } } import java.math.BigInteger; import java.security.SecureRandom; public class ElGamal { public static BigInteger[] ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) { // returns a BigInteger[] cipherText // cipherText[0] is c // cipherText[1] is d SecureRandom sr = new SecureRandom(); BigInteger[] cipherText = new BigInteger[2]; BigInteger pText = new BigInteger(plaintext.getBytes()); // 1: select a random integer k such that 1 <= k <= p-2 BigInteger k = new BigInteger(p.bitLength() - 2, sr); // 2: Compute c = g^k(mod p) BigInteger c = g.modPow(k, p); // 3: Compute d= P*r^k = P(g^a)^k(mod p) BigInteger d = pText.multiply(r.modPow(k, p)).mod(p); // C =(c,d) is the ciphertext cipherText[0] = c; cipherText[1] = d; return cipherText; } public static String ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) { //returns the plaintext enciphered as (c,d) // 1: use the private key a to compute the least non-negative residue // of an inverse of (c^a)' (mod p) BigInteger z = c.modPow(a, p).modInverse(p); BigInteger P = z.multiply(d).mod(p); byte[] plainTextArray = P.toByteArray(); return new String(plainTextArray); } }

    Read the article

  • Parent Objects

    - by Ali Bahrami
    Support for Parent Objects was added in Solaris 11 Update 1. The following material is adapted from the PSARC arc case, and the Solaris Linker and Libraries Manual. A "plugin" is a shared object, usually loaded via dlopen(), that is used by a program in order to allow the end user to add functionality to the program. Examples of plugins include those used by web browsers (flash, acrobat, etc), as well as mdb and elfedit modules. The object that loads the plugin at runtime is called the "parent object". Unlike most object dependencies, the parent is not identified by name, but by its status as the object doing the load. Historically, building a good plugin is has been more complicated than it should be: A parent and its plugin usually share a 2-way dependency: The plugin provides one or more routines for the parent to call, and the parent supplies support routines for use by the plugin for things like memory allocation and error reporting. It is a best practice to build all objects, including plugins, with the -z defs option, in order to ensure that the object specifies all of its dependencies, and is self contained. However: The parent is usually an executable, which cannot be linked to via the usual library mechanisms provided by the link editor. Even if the parent is a shared object, which could be a normal library dependency to the plugin, it may be desirable to build plugins that can be used by more than one parent, in which case embedding a dependency NEEDED entry for one of the parents is undesirable. The usual way to build a high quality plugin with -z defs uses a special mapfile provided by the parent. This mapfile defines the parent routines, specifying the PARENT attribute (see example below). This works, but is inconvenient, and error prone. The symbol table in the parent already describes what it makes available to plugins — ideally the plugin would obtain that information directly rather than from a separate mapfile. The new -z parent option to ld allows a plugin to link to the parent and access the parent symbol table. This differs from a typical dependency: No NEEDED record is created. The relationship is recorded as a logical connection to the parent, rather than as an explicit object name However, it operates in the same manner as any other dependency in terms of making symbols available to the plugin. When the -z parent option is used, the link-editor records the basename of the parent object in the dynamic section, using the new tag DT_SUNW_PARENT. This is an informational tag, which is not used by the runtime linker to locate the parent, but which is available for diagnostic purposes. The ld(1) manpage documentation for the -z parent option is: -z parent=object Specifies a "parent object", which can be an executable or shared object, against which to link the output object. This option is typically used when creating "plugin" shared objects intended to be loaded by an executable at runtime via the dlopen() function. The symbol table from the parent object is used to satisfy references from the plugin object. The use of the -z parent option makes symbols from the object calling dlopen() available to the plugin. Example For this example, we use a main program, and a plugin. The parent provides a function named parent_callback() for the plugin to call. 
The plugin provides a function named plugin_func() to the parent: % cat main.c #include <stdio.h> #include <dlfcn.h> #include <link.h> void parent_callback(void) { printf("plugin_func() has called parent_callback()\n"); } int main(int argc, char **argv) { typedef void plugin_func_t(void); void *hdl; plugin_func_t *plugin_func; if (argc != 2) { fprintf(stderr, "usage: main plugin\n"); return (1); } if ((hdl = dlopen(argv[1], RTLD_LAZY)) == NULL) { fprintf(stderr, "unable to load plugin: %s\n", dlerror()); return (1); } plugin_func = (plugin_func_t *) dlsym(hdl, "plugin_func"); if (plugin_func == NULL) { fprintf(stderr, "unable to find plugin_func: %s\n", dlerror()); return (1); } (*plugin_func)(); return (0); } % cat plugin.c #include <stdio.h> extern void parent_callback(void); void plugin_func(void) { printf("parent has called plugin_func() from plugin.so\n"); parent_callback(); } Building this in the traditional manner, without -zdefs: % cc -o main main.c % cc -G -o plugin.so plugin.c % ./main ./plugin.so parent has called plugin_func() from plugin.so plugin_func() has called parent_callback() As noted above, when building any shared object, the -z defs option is recommended, in order to ensure that the object is self contained and specifies all of its dependencies. However, the use of -z defs prevents the plugin object from linking due to the unsatisfied symbol from the parent object: % cc -zdefs -G -o plugin.so plugin.c Undefined first referenced symbol in file parent_callback plugin.o ld: fatal: symbol referencing errors. No output written to plugin.so A mapfile can be used to specify to ld that the parent_callback symbol is supplied by the parent object. % cat plugin.mapfile $mapfile_version 2 SYMBOL_SCOPE { global: parent_callback { FLAGS = PARENT }; }; % cc -zdefs -Mplugin.mapfile -G -o plugin.so plugin.c However, the -z parent option to ld is the most direct solution to this problem, allowing the plugin to actually link against the parent object, and obtain the available symbols from it. An added benefit of using -z parent instead of a mapfile, is that the name of the parent object is recorded in the dynamic section of the plugin, and can be displayed by the file utility: % cc -zdefs -zparent=main -G -o plugin.so plugin.c % elfdump -d plugin.so | grep PARENT [0] SUNW_PARENT 0xcc main % file plugin.so plugin.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent main, dynamically linked, not stripped % ./main ./plugin.so parent has called plugin_func() from plugin.so plugin_func() has called parent_callback() We can also observe this in elfedit plugins on Solaris systems running Solaris 11 Update 1 or newer: % file /usr/lib/elfedit/dyn.so /usr/lib/elfedit/dyn.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent elfedit, dynamically linked, not stripped, no debugging information available Related Other Work The GNU ld has an option named --just-symbols that can be used in a similar manner: --just-symbols=filename Read symbol names and their addresses from filename, but do not relocate it or include it in the output. This allows your output file to refer symbolically to absolute locations of memory defined in other programs. You may use this option more than once. -z parent is a higher level operation aimed specifically at simplifying the construction of high quality plugins. Although it employs the same operation, it differs from --just symbols in 2 significant ways: There can only be one parent. 
The parent is recorded in the created object, and can be displayed by 'file', or other similar tools.

    Read the article

  • How to check if a Webcam is broken?

    - by user84812
    I just bought an Acer Aspire 3830TG, it comes with an integrated 1.3M HD Webcam. Before buying it i tried with a bootable Lubuntu usb stick, everything worked well except for the webcam, which i thought I had to tweak. The thing is that it seems the camera should work with no problems in ubuntu. The driver is detected, I try dmesg | grep uvcvideo and the output is [ 12.226174] uvcvideo: Found UVC 1.00 device 1.3M HD WebCam (058f:b002) [ 12.245553] usbcore: registered new interface driver uvcvideo I've also tried using different software (guvcview is black when camera output is MJPG and turns to funny colors when YU12 or YV12, cheese is always black, camorama is always with funny colors...). I should have checked that it was working properly with the default os (windows) but now it's too late for that. I even booted with a official Ubuntu Quantal distro from the usb pen, and the results are the same. so, my question is: is there any way to check that the camera is righmt or broken? So, if it's broken, at least i can go to the shop, show them that it's really broken and get an external webcam for free, or st like that. cheers. UPDATE 1 Thanks, jrp. I run sudo lsinput, and the output info about my video is the following: /dev/input/event6 bustype : BUS_USB vendor : 0x58f product : 0xb002 version : 2 name : "1.3M HD WebCam" phys : "usb-0000:00:1a.0-1.3/button" bits ev : EV_SYN EV_KEY /dev/input/event7 bustype : BUS_HOST vendor : 0x0 product : 0x6 version : 0 name : "Video Bus" phys : "LNXVIDEO/video/input0" bits ev : EV_SYN EV_KEY With this info, i'm not pretty sure about running the luvcview command. If I run luvcview -d /dev/video0 -L, the output is the following: SDL information: Video driver: x11 A window manager is available Device information: Device path: /dev/video0 { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/7, 1/5, { pixelformat = 'MJPG', description = 'MJPEG' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { 
discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'RGB3', description = 'RGB3' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'BGR3', description = 'BGR3' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'YU12', description = 'YU12' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'YV12', description = 'YV12' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 
1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, if i run luvcview by itself, the image is funny (blue and red colors, mainly, with myself in negative state). tx
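    One way to narrow down a driver-versus-hardware problem is to grab single frames from both the raw YUYV path and the MJPEG path outside of any GUI application and compare them. This is only a sketch; it assumes ffmpeg is installed, and the device node /dev/video0 is taken from the output above.

        # Capture one frame via the uncompressed YUYV pipeline...
        ffmpeg -f video4linux2 -input_format yuyv422 -video_size 640x480 \
               -i /dev/video0 -frames:v 1 snapshot_yuyv.png

        # ...and one via the camera's MJPEG pipeline, then compare the two images.
        ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 \
               -i /dev/video0 -frames:v 1 snapshot_mjpeg.png

    If both stills show the same color cast across different applications and distributions, the sensor or its cabling is a more likely culprit than the Linux driver stack, which is a reasonable argument to take back to the shop.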

    Read the article

  • Memory Leaks - Objective-C

    - by reising1
    Can anyone help point out memory leaks? I'm getting a bunch within this method and I'm not sure exactly how to fix it. - (NSMutableArray *)getTop5AndOtherKeysAndValuesFromDictionary:(NSMutableDictionary *)dict { NSLog(@"get top 5"); int sumOfAllValues = 0; NSMutableArray *arr = [[[NSMutableArray alloc] init] retain]; for(NSString *key in dict){ NSString *value = [[dict objectForKey:key] retain]; [arr addObject:value]; sumOfAllValues += [value intValue]; } //sort values NSArray *sorted = [[arr sortedArrayUsingFunction:sort context:NULL] retain]; [arr release]; //top 5 values int sumOfTop5 = 0; NSMutableArray *top5 = [[[NSMutableArray alloc] init] retain]; for(int i = 0; i < 5; i++) { int proposedIndex = [sorted count] - 1 - i; if(proposedIndex >= 0) { [top5 addObject:[sorted objectAtIndex:([sorted count] - i - 1)]]; sumOfTop5 += [[sorted objectAtIndex:([sorted count] - i - 1)] intValue]; } } [sorted release]; //copy of all keys NSMutableArray *copyOfKeys = [[[NSMutableArray alloc] init] retain]; for(NSString *key in dict) { [copyOfKeys addObject:key]; } //copy of top 5 values NSMutableArray *copyOfTop5 = [[[NSMutableArray alloc] init] retain]; for(int i = 0; i < [top5 count]; i++) { [copyOfTop5 addObject:[top5 objectAtIndex:i]]; } //get keys with top 5 values NSMutableArray *outputKeys = [[[NSMutableArray alloc] init] retain]; for(int i = 0; i < [top5 count]; i++) { NSString *targetValue = [top5 objectAtIndex:i]; for(int j = 0; j < [copyOfKeys count]; j++) { NSString *key = [copyOfKeys objectAtIndex:j]; NSString *val = [dict objectForKey:key]; if([val isEqualToString:targetValue]) { [outputKeys addObject:key]; [copyOfKeys removeObjectAtIndex:j]; break; } } } [outputKeys addObject:@"Other"]; [top5 addObject:[[NSString stringWithFormat:@"%d",(sumOfAllValues - sumOfTop5)] retain]]; NSMutableArray *output = [[NSMutableArray alloc] init]; [output addObject:outputKeys]; [output addObject:top5]; NSMutableArray *percents = [[NSMutableArray alloc] init]; int sum = sumOfAllValues; float leftOverSum = sum * 1.0f; int count = [top5 count]; float val1, val2, val3, val4, val5; if(count >= 1) val1 = ([[top5 objectAtIndex:0] intValue] * 1.0f)/sum; else val1 = 0.0f; if(count >=2) val2 = ([[top5 objectAtIndex:1] intValue] * 1.0f)/sum; else val2 = 0.0f; if(count >= 3) val3 = ([[top5 objectAtIndex:2] intValue] * 1.0f)/sum; else val3 = 0.0f; if(count >= 4) val4 = ([[top5 objectAtIndex:3] intValue] * 1.0f)/sum; else val4 = 0.0f; if(count >=5) val5 = ([[top5 objectAtIndex:4] intValue] * 1.0f)/sum; else val5 = 0.0f; if(val1 >= .00001f) { NSMutableArray *a1 = [[NSMutableArray alloc] init]; [a1 addObject:[outputKeys objectAtIndex:0]]; [a1 addObject:[top5 objectAtIndex:0]]; [a1 addObject:[NSString stringWithFormat:@"%.01f",(val1*100)]]; [percents addObject:a1]; leftOverSum -= ([[top5 objectAtIndex:0] intValue] * 1.0f); } if(val2 >= .00001f) { NSMutableArray *a2 = [[NSMutableArray alloc] init]; [a2 addObject:[outputKeys objectAtIndex:1]]; [a2 addObject:[top5 objectAtIndex:1]]; [a2 addObject:[NSString stringWithFormat:@"%.01f",(val2*100)]]; [percents addObject:a2]; leftOverSum -= ([[top5 objectAtIndex:1] intValue] * 1.0f); } if(val3 >= .00001f) { NSMutableArray *a3 = [[NSMutableArray alloc] init]; [a3 addObject:[outputKeys objectAtIndex:2]]; [a3 addObject:[top5 objectAtIndex:2]]; [a3 addObject:[NSString stringWithFormat:@"%.01f",(val3*100)]]; [percents addObject:a3]; leftOverSum -= ([[top5 objectAtIndex:2] intValue] * 1.0f); } if(val4 >= .00001f) { NSMutableArray *a4 = [[NSMutableArray alloc] init]; [a4 
addObject:[outputKeys objectAtIndex:3]]; [a4 addObject:[top5 objectAtIndex:3]]; [a4 addObject:[NSString stringWithFormat:@"%.01f",(val4*100)]]; [percents addObject:a4]; leftOverSum -= ([[top5 objectAtIndex:3] intValue] * 1.0f); } if(val5 >= .00001f) { NSMutableArray *a5 = [[NSMutableArray alloc] init]; [a5 addObject:[outputKeys objectAtIndex:4]]; [a5 addObject:[top5 objectAtIndex:4]]; [a5 addObject:[NSString stringWithFormat:@"%.01f",(val5*100)]]; [percents addObject:a5]; leftOverSum -= ([[top5 objectAtIndex:4] intValue] * 1.0f); } float valOther = (leftOverSum/sum); if(valOther >= .00001f) { NSMutableArray *a6 = [[NSMutableArray alloc] init]; [a6 addObject:[outputKeys objectAtIndex:5]]; [a6 addObject:[top5 objectAtIndex:5]]; [a6 addObject:[NSString stringWithFormat:@"%.01f",(valOther*100)]]; [percents addObject:a6]; } [output addObject:percents]; NSLog(@"mu - a"); //[arr release]; NSLog(@"mu - b"); //[copyOfKeys release]; NSLog(@"mu - c"); //[copyOfTop5 release]; NSLog(@"mu - c"); //[outputKeys release]; //[top5 release]; //[percents release]; return output; }
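    Under manual reference counting the leaks all follow the same pattern: every extra retain with no matching release is a leak. [[[NSMutableArray alloc] init] retain] claims ownership twice but is released at most once; the retain on [dict objectForKey:key] and on the stringWithFormat: result is never balanced; and output, percents and the a1 through a6 arrays are created with alloc/init and never released or autoreleased before the method returns. A minimal sketch of the intended ownership pattern (illustrative helper, not the full method):

        #import <Foundation/Foundation.h>

        // Pre-ARC sketch: rely on autoreleased convenience constructors and never
        // retain objects you did not obtain through alloc/new/copy.
        static NSMutableArray *ValuesForDictionary(NSDictionary *dict) {
            NSMutableArray *arr = [NSMutableArray array];   // autoreleased, no retain needed
            for (NSString *key in dict) {
                // objectForKey: does not transfer ownership, so don't retain the value;
                // addObject: retains it for as long as the array holds it.
                [arr addObject:[dict objectForKey:key]];
            }
            // Returned autoreleased, per the Cocoa naming convention: a method whose name
            // doesn't begin with alloc/new/copy/mutableCopy returns an object the caller
            // does not own.
            return arr;
        }

    Applying the same rule throughout, or keeping the alloc/init calls but adding a matching release or autorelease for every object created, should clear the reported leaks.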

    Read the article

  • NSStream sockets missing data

    - by Chris T.
    I am trying to pull some sample data from FreeDB as a proof of concept, but I am having a tough time retrieving all of the data off the incoming stream (I am only getting the last bits for the final query listed here (if handshakeCode = 3) I think this may be something with the threading on the main runloop, but I am not sure. Odd thing is when the buffer size is larger than 1-2 bytes (which works as expected), I seem to be losing access to the data programmatically (the totalOutput variable on the first set of data is incomplete). I set up a packet capture, and it looks like those 1024 bytes are coming across the wire, but the app just isn't working with it. It looks like the next event is coming through and basically taking over. I tried using an NSLock to no avail as well. If I drop the buffer size down to 1 or 2, things seem to be reading just fine. This is probably obvious to someone who does this all the time, but this is my first foray into this with something I am familiar with, technology wise in other languages / platforms. The following code will show you what is happening. Run with the buffer set to 1024, and you will see a short final string, but once you set it to 1, you will see the amount of data I was expecting (I was even expecting it to be split, so that's not a big worry) #import <Foundation/Foundation.h> #import <Cocoa/Cocoa.h> //STACK OVERFLOW CODE: @interface stackoverflow : NSObject <NSStreamDelegate> { NSInputStream *iStream; NSOutputStream *oStream; int handshakeCode; NSString *selectedDiscId; NSString *selectedGenre; } -(void)getMatchesFromFreeDB; -(void)sendToOutputStream:(NSString*)command; @end @implementation stackoverflow -(void)getMatchesFromFreeDB { NSHost *host = [NSHost hostWithName:@"freedb.freedb.org"]; [NSStream getStreamsToHost:host port:8880 inputStream:&iStream outputStream:&oStream]; [iStream retain]; [oStream retain]; [iStream setDelegate:self]; [oStream setDelegate:self]; [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [iStream open]; [oStream open]; handshakeCode = 0; //not done any processing } -(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode { switch(eventCode) { case NSStreamEventOpenCompleted: { NSLog(@"Stream open completed"); break; } case NSStreamEventHasBytesAvailable: { NSLog(@"Stream has bytes available"); if (aStream == iStream) { NSMutableString *totalOutput = [NSMutableString stringWithString:@""]; //read data uint8_t buffer[1024]; int len; while ([iStream hasBytesAvailable]) { len = [iStream read:buffer maxLength:sizeof(buffer)]; if (len 0) { NSString *output = [[NSString alloc] initWithBytes:buffer length:len encoding:NSUTF8StringEncoding]; //this could have also been put into an NSData object if (nil != output) { //append to the total output [totalOutput appendString:output]; } } } NSLog(@"OUTPUT , %i:\n\n%@", [totalOutput lengthOfBytesUsingEncoding:NSUTF8StringEncoding], totalOutput); NSArray *outputComponents = [totalOutput componentsSeparatedByString:@" "]; //Attempt to get handshake code, since we haven't done it yet: if (handshakeCode == 1) { //we are just getting the sign-on banner: //let's move on: handshakeCode = 2; } else if (handshakeCode == 2) { handshakeCode = [[outputComponents objectAtIndex:0] intValue]; if (handshakeCode == 200) { NSLog(@"---Handshake OK %i", handshakeCode); NSMutableString *query = [NSMutableString stringWithString:@"cddb query f3114b11 17 225 19915 
36489 54850 69425 87025 103948 123242 136075 152817 178335 192850 211677 235104 262090 284882 308658 4430\n"]; handshakeCode = 3; [self sendToOutputStream:query]; } } else if (handshakeCode == 3) { //now, we are reading out the matches: if ([[outputComponents objectAtIndex:0] intValue] == 200) //found exact match: { NSLog(@"Found exact match"); selectedGenre = [outputComponents objectAtIndex:1] ; selectedDiscId = [outputComponents objectAtIndex:2]; if (selectedGenre && selectedDiscId) { //send off the request to get the entry: NSString *query = [NSString stringWithFormat:@"cddb read %@ %@\n", selectedGenre, selectedDiscId]; [self sendToOutputStream:query]; handshakeCode = 4; } } } } break; } case NSStreamEventEndEncountered: { NSLog(@"Stream event end encountered"); break; } case NSStreamEventErrorOccurred: { NSLog(@"Stream error occurred"); break; } case NSStreamEventHasSpaceAvailable: { NSLog(@"Stream has space available"); if (aStream == oStream) { if (handshakeCode == 0) { handshakeCode = 1; [self sendToOutputStream:@"cddb hello stackoverflow localhost.localdomain test .01BETA\n"]; } } break; } } } -(void)sendToOutputStream:(NSString*)command { const uint8_t *rawCommand = (const uint8_t *)[command UTF8String]; [oStream write:rawCommand maxLength:strlen(rawCommand)]; NSLog(@"Sent command: %@",command); } @end int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; stackoverflow *test = [[stackoverflow alloc] init]; [test getMatchesFromFreeDB]; NSRunLoop *runLoop = [NSRunLoop currentRunLoop]; [runLoop run]; [pool drain]; return 0; } Any help is much appreciated! Thanks

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 3 - Tools of the trade This is a second part of the series and in it we’ll take a look at what database refactoring is and why do it. Why refactor a database To know why refactor we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module’s input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: Tests. Automated unit tests are the only guarantee we have that we haven’t broken the input/output behavior before refactoring. If you haven’t go back ad read my post on the matter. Then start writing them. Next thing you need is a code module. Those are views, UDFs and stored procedures. By having direct table access we can kiss fast and sweet refactoring good bye. One more point to have a database abstraction layer. And no, ORM’s don’t fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don’t be one of them and resist the lure of the dark side. And it’s a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor. What to refactor in a database Refactoring databases doesn’t happen that often but when it does it can include a lot of stuff. Let us look at a few common cases. Adding or removing database schema objects Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn’t exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With the proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all). Changing physical structures Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won’t that make users happy? 
But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can’t do successful refactoring! Fixing bad code We all have some bad code in our systems. We usually refer to such code as code smells, because it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won’t have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Tomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it’s informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database. Breaking apart huge stored procedures Have you ever seen a stored procedure that was 2000 lines long? I have. It’s not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I’m willing to bet that 100% of the time they don’t have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That’s the application’s part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure’s input/output behavior before starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we’ve successfully modularized the database code, it’s best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn’t always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we’ve actually improved the codebase in some way. Refactoring is not a popular chore amongst developers or managers. The former don’t like fixing old code, the latter can’t see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I’ve given you some ideas on how to get started. In the last post of the series we’ll take a look at the tools to use and an example of testing and refactoring.
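    As a concrete illustration of the SELECT * refactoring described above (hypothetical table and view names, not from the post), pinning a view to explicit columns keeps its output contract stable when unrelated columns are added to or dropped from the base table:

        -- Before: the view silently changes shape whenever dbo.Customer changes.
        -- CREATE VIEW dbo.vCustomerContact AS SELECT * FROM dbo.Customer;

        -- After: the contract is explicit, and WITH SCHEMABINDING makes the dependency
        -- visible, so an ALTER TABLE that would break the view fails loudly instead.
        CREATE VIEW dbo.vCustomerContact
        WITH SCHEMABINDING
        AS
        SELECT c.CustomerId,
               c.FirstName,
               c.LastName,
               c.Email
        FROM   dbo.Customer AS c;

    A black-box test that selects from the view and asserts the expected column list and row counts then guards the refactoring.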

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is often seen as cumbersome and slow moving.

    Waterfall Model Phases

    Requirement Analysis & Definition: This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output of this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design: This phase focuses primarily on the actual architectural design of a system and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions, such as hardware and platform choices, are typically made in this phase. The basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Additionally, scalability, distribution, and reliability should be considered in all decisions. The desired output of this phase is a functional design document that states all of the architectural decisions made in regard to the project, along with diagrams such as sequence and class diagrams.

    Software Design: This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding: This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole.

    Software Integration & Verification: This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing in this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected and actual results.

    System Verification: This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance: This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase also validates the correctness of the requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled here.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model:
    - Simple to implement and execute for individual projects and/or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model:
    - Completed phases cannot be revisited, regardless of whether issues arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their desires change
    - Excess documentation
    - Phases are cumbersome and slow moving

    Learn more about the Major Process in the Software Development Life Cycle and Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted into basic working examples and subsequent changes are made to quickly align the project with the customer's desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the need for well-defined project requirements up front. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows the customer to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches to an application. Prototypes allow examples of specific functionality to be developed quickly using particular techniques.

    Advantages of Prototyping:
    - Active participation from users and customers
    - Allows customers to change their minds in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping:
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity of the system as its scope expands
    - The customer could mistake the prototype for the working version
    - Implementation compromises can increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall model and the Prototyping model, they need to evaluate the benefits and disadvantages of both models. Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach, because it can reduce the time needed to complete the project: the focus is more on building the project and less on defining requirements and scope before it starts. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer towards more of a Waterfall approach, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on little tool to simplify my life and perhaps yours as developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an api change would be if you changed this or that.  Now you can do high level operations like Diff public types for breaking changes. Who uses a method? Who uses a type? Who uses implements an interface? Who references me? What format has the binary  (32/64, Managed C++, Pure IL, Unmanaged)? Search for all event subscribers and unsubscribers. A unique feature is to check for event subscription imbalances. Forgotten event subscriptions are the 90% cause of managed memory leaks. It is done at a per class level. If one class does subscribe to one event more often than it does unsubscribe it is treated as possible event subscription imbalance. Another unique ability is to search for users of string literals which allows you to track users of a string constant which is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependant targets that need a rebuild after you did compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and a an API to access these things programmatically so you can build upon these simple queries create even better tools. Perhaps we get a Visual Studio plug in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check the command line help out. The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it does read also the pdbs it can show you the source file name and line number as well for all matches. The following picture shows the output of a –WhousesType query. The following command checks where type from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item along with file and line number. There is even a hyper link to the match which will open Visual Studio. ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll –excel The "*” is the actual query which means all types. The syntax is the same like in C# just that placeholders are allowed ;-). More info's can be found at the Codeplex Documentation.     The tool was developed in a TDD style manner which means that it is heavily tested and already used by a quite large user base inside the company I do work for. Luckily for you I got the permission to make it public so you take advantage of it. It is fully instrumented with tracing. If you find bugs simply add the –trace command line switch to find out what is failing and send me the output. How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome and to make it even faster I did make the tool heavily multi threaded. The query above did execute in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with a very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file. 
I have no problem unloading a file, but if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, I show you one of the unit tests:

    public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
    {
        // 1. Create an aggregator to collect our matches
        UsageQueryAggregator agg = new UsageQueryAggregator();

        // 2. This is the type we want to search for. Load it via the type query
        var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

        // 3. Register the type query which searches for uses of the Decimal type
        new WhoUsesType(agg, decimalType);

        // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
        agg.Analyze(TestConstants.DependandLibV1Assembly);

        // Extract matches and assert
        Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
        Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
        Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
    }

Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is that you try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I have not talked about (yet).
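To give a feel for the Mono.Cecil approach described above, here is a minimal sketch (not part of ApiChange itself, and written against the current Mono.Cecil API rather than the 2009-era one) that loads an assembly as a plain data file and lists methods whose IL references System.Decimal. The assembly path is an assumption, and the matching logic is deliberately far simpler than the real WhoUsesType query.

    using System;
    using Mono.Cecil;

    class CecilSketch
    {
        static void Main()
        {
            // Assumption: path to any compiled assembly you want to inspect
            var assembly = AssemblyDefinition.ReadAssembly(@"DependantLibV1.dll");

            foreach (var type in assembly.MainModule.Types)
            {
                foreach (var method in type.Methods)
                {
                    if (!method.HasBody) continue;

                    // Scan the IL of each method body for direct references to System.Decimal
                    foreach (var instruction in method.Body.Instructions)
                    {
                        var operand = instruction.Operand as TypeReference;
                        if (operand != null && operand.FullName == "System.Decimal")
                        {
                            Console.WriteLine("{0}.{1} uses System.Decimal",
                                type.FullName, method.Name);
                            break;
                        }
                    }
                }
            }
        }
    }

Because Cecil hands back plain object graphs, dropping the AssemblyDefinition is enough to release the file - no AppDomain unloading required, which is exactly the property the post relies on.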

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to setup a “test” hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclinded to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies other another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach and it occured to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine. C:> winrm quickconfig WinRM is not set up to receive requests on this machine. The following changes must be made: Set the WinRM service type to auto start. Start the WinRM service. Make these changes [y/n]? y Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet: $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows. 
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • When building a web application project, TFS 2008 builds two separate projects in _PublishedFolder.

    - by Steve Johnson
    I am trying to perform build automation on one of my web application projects built using VS 2008. The _PublishedWebSites contains two folders: Web and Deploy. I want TFS 2008 to generate only the deploy folder and not the web folder. Here is my TFSBuild.proj file: <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" /> <ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|AnyCPU"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>Any CPU</PlatformToBuild> </ConfigurationToBuild> </ItemGroup> <!--<ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|x64"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>x64</PlatformToBuild> </ConfigurationToBuild> </ItemGroup>--> <ItemGroup> <AdditionalReferencePath Include="C:\3PR" /> </ItemGroup> <Target Name="GetCopyToOutputDirectoryItems" Outputs="@(AllItemsFullPathWithTargetPath)" DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence"> <!-- Get items from child projects first. --> <MSBuild Projects="@(_MSBuildProjectReferenceExistent)" Targets="GetCopyToOutputDirectoryItems" Properties="%(_MSBuildProjectReferenceExistent.SetConfiguration); %(_MSBuildProjectReferenceExistent.SetPlatform)" Condition="'@(_MSBuildProjectReferenceExistent)'!=''"> <Output TaskParameter="TargetOutputs" ItemName="_AllChildProjectItemsWithTargetPathNotFiltered"/> </MSBuild> <!-- Remove duplicates. --> <RemoveDuplicates Inputs="@(_AllChildProjectItemsWithTargetPathNotFiltered)"> <Output TaskParameter="Filtered" ItemName="_AllChildProjectItemsWithTargetPath"/> </RemoveDuplicates> <!-- Target outputs must be full paths because they will be consumed by a different project. --> <CreateItem Include="@(_AllChildProjectItemsWithTargetPath->'%(FullPath)')" Exclude= "$(BuildProjectFolderPath)/../../Development/Main/Web/Bin*.pdb; *.refresh; *.vshost.exe; *.manifest; *.compiled; $(BuildProjectFolderPath)/../../Development/Main/Web/Auth/MySoftware.dll; $(BuildProjectFolderPath)/../../Development/Main/Web/BinApp_Web_*.dll;" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'" > <Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always'"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/> </CreateItem> </Target> <!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.WebDeployment.targets. 
<Target Name="BeforeBuild"> </Target> <Target Name="BeforeMerge"> </Target> <Target Name="AfterMerge"> </Target> <Target Name="AfterBuild"> </Target> --> </Project> I want to build everything that the builtin Deploy project is doing for me. But I don't want the generated web project as it contains App_Web_xxxx.dll assemblies instead of a single compiled assembly. How can I do this?

    Read the article

  • PHP Form checkbox question

    - by Sef
    Hello, I have a form that takes the following inputs: Name: IBM Surface(in m^2): 9 Floor: (Checkbox1) Phone: (Checkbox2) Network: (Checkbox3) Button to send to a next php page. All those values above are represented in a table when i press the submit button. The first two (name and surname) are properly displayed in the table. The problem is with the checkboxes. If i select the first checkbox the value in the table should be presented with 1. If its not selected the value in the table should be empty. echo "<td>$Name</td>"; // works properly echo "<td>$Surface</td>"; // works properly echo "<td>....no idea for the checkboxes</td>; Some part of my php code with the variables: <?php if (!empty($_POST)) { $standnaam = $_POST["name"]; $oppervlakte = $_POST["surface"]; $verdieping = $_POST["floor"]; $telefoon = $_POST["telefoon"]; $netwerk = $_POST["netwerk"]; if (is_numeric($surface)) { $_SESSION["name"]=$name; $_SESSION["surface"]=$surface; header("Location:ExpoOverzicht.php"); } else { echo "<h1>Wrong input, Pleasee fill in again</h1>"; } if(!empty($floor) && ($phone) && ($network)) { $_SESSION["floor"]=$floor; $_SESSION["phone"]=$phone; $_SESSION["network"]=$network; header("Location:ExpoOverzicht.php"); } } ?> Second page with table: <?php $name= $_SESSION["name"]; $surface= $_SESSION["surface"]; $floor= $_SESSION["floor"]; $phone= $_SESSION["phone"]; $network= $_SESSION["network"]; echo "<table class=\"tableExpo\">"; echo "<th>name</th>"; echo "<th>surface</th>"; echo "<th>floor</th>"; echo "<th>phone</th>"; echo "<th>network</th>"; echo "<th>total price</th>"; for($i=0; $i <= $_SESSION["name"]; $i++) { echo "<tr>"; echo "<td>$name</td>"; // gives right output echo "<td>$surface</td>"; // gives right output echo "<td>...</td>"; //wrong output (ment for checkbox 1) echo "<td>...</td>"; //wrong output (ment for checkbox 2) echo "<td>...</td>"; //wrong output (ment for checkbox 3) echo "<td>....</td>"; echo "</tr>;"; } echo "</table>"; <form action="<?php echo $_SERVER["PHP_SELF"]; ?>" method="post" id="form1"> <h1>Vul de gegevens in</h1> <table> <tr> <td>Name:</td> <td><input type="text" name="name" size="18"/></td> </tr> <tr> <td>Surface(in m^2):</td> <td><input type="text" name="surface" size="6"/></td> </tr> <tr> <td>Floor:</td> <td><input type="checkbox" name="floor" value="floor"/></td> </tr> <tr> <td>Phone:</td> <td><input type="checkbox" name="phone" value="phone"/></td> </tr> <tr> <td>Network:</td> <td><input type="checkbox" name="network" value="network"/></td> </tr> <tr> <td><input type="submit" name="verzenden" value="Verzenden"/></td> </tr> </table> There might be a few spelling mistakes since i had to translate it. Best regards.

    Read the article

  • Monitoring Recovery Manager (RMAN) Jobs

    - by ??
    RMAN is the tool Oracle recommends for backing up and restoring Oracle databases, and one of the most common questions is how to tell what a running RMAN backup or restore job is doing and how far along it is. My Oracle Support Document 1487262.1 provides a monitoring script that looks at an RMAN backup or restore job from four angles: overall progress, channel wait events, asynchronous I/O and synchronous I/O. The script is run from SQL*Plus and takes a single start-time parameter in "DD-MON-RR HH24:MI:SS" format, for example:
    SQL> spool monitor.out
    SQL> @monitor '06-aug-12 16:38:03'
    Overall progress. The V$SESSION_LONGOPS view, familiar to most DBAs, shows how much work each RMAN channel has completed:
    SID CH CONTEXT SOFAR TOTALWORK % Complete
    --- -- ------- ------ ---------- ----------
    16 t1 1 181950 1186752 15.33
    148 t2 1 249722 422400 59.12
    Channel wait events. The wait event of each RMAN session shows where a slow job is spending its time, for example waiting on the database, the media management layer or the client:
    SID CH SEQ# EVENT STATE SECONDS
    --- -- ---- --------------------------------- ------- -------
    16 t1 5735 RMAN backup & recovery I/O WAITING 10
    143 8200 SQL*Net message from client WAITING 258.83
    148 t2 7941 Backup: MML create a backup piece WAITING 196.13
    In this example channel t2 has spent a long time in 'Backup: MML create a backup piece', i.e. waiting on the media manager. The CH column identifies the RMAN channel, the SEQ# value changes while a session is making progress, and long waits on 'RMAN backup & recovery I/O' point to an I/O bottleneck on the backup device.
    Asynchronous I/O. For backups that use asynchronous I/O (for example disk backups, or tape backups with backup_tape_io_slaves=TRUE), per-file progress can be followed through the asynchronous I/O statistics:
    SID CH STATUS OPEN_TIME SOFAR Mb TOTMB IO_COUNT % Complete TYPE FILENAME
    --- -- ----------- ------------------ -------- ------ ---------- -------- ------ -------------
    19 t1 FINISHED 02-aug-12 17:28:42 567.5 567.5 569 100.00 INPUT users01.dbf
    19 t1 IN PROGRESS 02-aug-12 17:28:42 3489.99 8704 3490 40.10 INPUT sh.dbf
    19 t1 IN PROGRESS 02-aug-12 17:28:42 3778 15113 OUTPUT kvnhlb22_1_1
    20 t2 FINISHED 02-aug-12 17:28:38 740 740 742 100.00 INPUT system01.dbf
    20 t2 FINISHED 02-aug-12 17:28:38 880 880 882 100.00 INPUT sysaux01.dbf
    20 t2 FINISHED 02-aug-12 17:28:38 1680 1680 1682 100.00 INPUT undotbs01.dbf
    20 t2 FINISHED 02-aug-12 17:28:38 3007.25 12029 OUTPUT l0nhlb23_1_1
    The IO_COUNT column shows how many I/Os have been issued for each file; see Document 360443.1 RMAN Backup Performance for tuning advice.
    Synchronous I/O. When the I/O is synchronous, the same per-file information is available from v$backup_sync_io:
    SID CH FILENAME TYPE STATUS BSZ BFC OPEN IO_COUNT
    --- -- ------------ ------ ----------- ------ --- ------------------ --------
    16 t1 ksnhla1b_1_1 OUTPUT IN PROGRESS 262144 4 02-aug-12 17:12:20 1092
    148 t2 ktnhla1c_1_1 OUTPUT IN PROGRESS 262144 4 02-aug-12 17:12:15 2968
    Again, the IO_COUNT column shows whether I/O is still being issued for each file and backup piece.
    References: Document 1487262.1 Script to monitor RMAN Backup and Restore Operations; Document 144640.1 RMAN: Monitoring Recovery Manager Jobs; Document 360443.1 RMAN Backup Performance; Document 740911.1 RMAN Restore Performance. Questions about RMAN backup and recovery can also be discussed with Oracle engineers in the My Oracle Support Database Backup and Recovery community.

    Read the article

  • 12.04 Unity 3D Not working where as Unity 2D works fine

    - by Stephen Martin
    I updated from 11.10 to 12.04 using the distribution upgrade, after that I couldn't log into using the Unity 3D desktop after logging in I would either never get unity's launcher or I would get the launcher and once I tried to do anything the windows lost their decoration and nothing would respond. If I use Unity 2D it works fine and in fact I'm using it to type this. I managed to get some info out of dmesg that looks like it's the route of whats happening. dmesg: [ 109.160165] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung [ 109.160180] [drm] capturing error event; look for more information in /debug/dri/0/i915_error_state [ 109.167587] [drm:i915_wait_request] *ERROR* i915_wait_request returns -11 (awaiting 1226 at 1218, next 1227) [ 109.672273] [drm:i915_reset] *ERROR* Failed to reset chip. output of lspci | grep vga 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) output of /usr/lib/nux/unity_support_test -p IN 12.04 OpenGL vendor string: VMware, Inc. OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300) OpenGL version string: 2.1 Mesa 8.0.2 Not software rendered: no Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no The same command in 11.10: stephenm@mcr-ubu1:~$ /usr/lib/nux/unity_support_test -p OpenGL vendor string: Tungsten Graphics, Inc OpenGL renderer string: Mesa DRI Mobile Intel® GM45 Express Chipset OpenGL version string: 2.1 Mesa 7.11 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes stephenm@mcr-ubu1:~$ output of /var/log/Xorg.0.log [ 11.971] (II) intel(0): EDID vendor "LPL", prod id 307 [ 11.971] (II) intel(0): Printing DDC gathered Modelines: [ 11.971] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz) [ 12.770] (II) intel(0): Allocated new frame buffer 2176x800 stride 8704, tiled [ 15.087] (II) intel(0): EDID vendor "LPL", prod id 307 [ 15.087] (II) intel(0): Printing DDC gathered Modelines: [ 15.087] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz) [ 33.310] (II) XKB: reuse xkmfile /var/lib/xkb/server-93A39E9580D1D5B855D779F4595485C2CC66E0CF.xkm [ 34.900] (WW) intel(0): flip queue failed: Invalid argument [ 34.900] (WW) intel(0): Page flip failed: Invalid argument [ 34.900] (WW) intel(0): flip queue failed: Invalid argument [ 34.900] (WW) intel(0): Page flip failed: Invalid argument [ 34.913] (WW) intel(0): flip queue failed: Invalid argument [ 34.913] (WW) intel(0): Page flip failed: Invalid argument [ 34.913] (WW) intel(0): flip queue failed: Invalid argument [ 34.913] (WW) intel(0): Page flip failed: Invalid argument [ 34.926] (WW) intel(0): flip queue failed: Invalid argument [ 34.926] (WW) intel(0): Page flip failed: Invalid argument [ 34.926] (WW) intel(0): flip queue failed: Invalid argument [ 34.926] (WW) intel(0): Page flip failed: Invalid argument [ 35.501] (WW) intel(0): flip queue failed: Invalid argument [ 35.501] (WW) intel(0): Page flip failed: Invalid argument [ 35.501] (WW) intel(0): flip queue failed: Invalid argument [ 
35.501] (WW) intel(0): Page flip failed: Invalid argument [ 41.519] [mi] Increasing EQ size to 512 to prevent dropped events. [ 42.079] (EE) intel(0): Detected a hung GPU, disabling acceleration. [ 42.079] (EE) intel(0): When reporting this, please include i915_error_state from debugfs and the full dmesg. [ 42.598] (II) intel(0): EDID vendor "LPL", prod id 307 [ 42.598] (II) intel(0): Printing DDC gathered Modelines: [ 42.598] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz) [ 51.052] (II) AIGLX: Suspending AIGLX clients for VT switch I know I'm using the beta version so I'm not expecting it to work, but does anyone know what the problem may be, or even why the Unity compatibility test is describing my video card as VMware when it's an Intel i915 and Ubuntu is running on the metal, not virtualised? Unity 3D worked fine in 11.10.

    Read the article

  • How To Rip an Audio CD to FLAC with Foobar2000

    - by Mysticgeek
    Foobar2000 is a great audio player that is fully customizable, is light on system resources, and contains a lot of tools and features. Today we show you how to use it to rip an audio CD to FLAC format. Note: For this tutorial we're going to assume this is the first time you're ripping a disc with Foobar2000. We're running it on Windows 7 Ultimate 64-bit.
    Install Foobar2000 and FLAC
    First download and install Foobar2000 (link below). The main things you'll want to make sure to enable during the install process are Audio CD Support and the freedb Tagger, which are located under Optional Features; then continue through the rest of the install wizard. Next you need to install the latest version of the FLAC codec (link below), following the defaults.
    Rip Audio CD
    To rip a CD, place it in your CD-ROM drive, launch Foobar2000 and click File \ Open Audio CD. Select the appropriate CD drive and click the Rip button. Next you'll want to look up the disc information with freedb, or you can manually enter the track data if it's a custom disc. Select the proper tag information in the freedb tagger window, then click Update files. The data will be entered in; make sure the radio button next to Go to the Converter Setup dialog is selected, and click the Rip button. In the Converter Setup screen you can select the output format, which in our case is FLAC. In this window you can also choose several other options like the output path, merging the tracks into one file or keeping individual files, etc. When you have those settings completed click OK. Next you'll need to find flac.exe, which is located wherever you installed it. On our 64-bit Windows 7 system the default path is C:\Program Files (x86)\FLAC. Now wait while your CD is ripped and converted to FLAC. You'll get a Converter Status Report; after you've checked it over you can close out of it. If you set the option to show the output files after conversion you can take a look, make sure all tracks were converted, and play them right away if you want. You can play the tracks in Foobar2000 or any player that supports FLAC. If you want to use WMC or WMP see our article on how to play FLAC files in Windows 7 Media Center or Player. That's all there is to it! If you're a fan of Foobar2000 and enjoy your music converted to FLAC format, Foobar2000 does the job quite well. There are a lot of customizations and tools you can use in Foobar2000 that we'll be taking a look at in future articles. For more information check out our look at this fully customizable music player. Foobar2000 runs on XP, Vista, and Windows 7.
    Links
    Download Foobar2000
    Download FLAC

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that goes around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures The MapReduce framework usually consists of distributed servers and it runs various tasks in parallel to each other. There are various components which manage the communication between the various nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on the distributed servers at run time. If any node fails during this process, the framework provides high availability and the other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data on thousands of processing machines. How does the MapReduce Framework Work? A typical MapReduce framework contains petabytes of data and thousands of nodes. Here is the basic explanation of the MapReduce procedures which use this massive pool of commodity servers. Map() Procedure There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once a worker node completes the work on its sub-problem it returns the result back to the master node. Reduce() Procedure All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and aggregates them into the answer to the original big problem that was assigned to it. The MapReduce framework runs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run parallel to each other, and once each worker node has completed its task it can send the result back to the master node to be compiled into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data). 
The MapReduce framework has five different steps: (1) preparing the Map() input, (2) executing the user-provided Map() code, (3) shuffling the Map output to the Reduce processors, (4) executing the user-provided Reduce code, and (5) producing the final output. Here is the dataflow of the MapReduce framework: Input Reader, Map Function, Partition Function, Compare Function, Reduce Function, Output Writer. In a future blog post of this 31 day series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow In tomorrow's blog post we will discuss the Buzz Word – HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
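To make the Map() and Reduce() roles above concrete, here is a minimal, single-machine word-count sketch in C#. It illustrates only the programming model, not Hadoop or any distributed framework, and all names and the sample input are made up for the example: the Map step emits (word, 1) pairs and the Reduce step sums the counts per key, with a LINQ GroupBy standing in for the shuffle phase.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class MapReduceSketch
    {
        // Map: split one input line into (word, 1) pairs
        static IEnumerable<KeyValuePair<string, int>> Map(string line)
        {
            foreach (var word in line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
                yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
        }

        // Reduce: sum all the counts that were emitted for a single key
        static int Reduce(string key, IEnumerable<int> counts)
        {
            return counts.Sum();
        }

        static void Main()
        {
            var input = new[] { "big data is big", "map reduce handles big data" };

            // "Shuffle" phase: group the mapped pairs by key
            var grouped = input.SelectMany(Map).GroupBy(p => p.Key, p => p.Value);

            foreach (var group in grouped)
                Console.WriteLine("{0}: {1}", group.Key, Reduce(group.Key, group));
        }
    }

In a real framework the Map calls run on many worker nodes, the grouping happens across the network, and the Reduce calls run on yet other nodes; the shape of the two user-supplied functions, however, stays exactly this simple.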

    Read the article

  • Ubuntu 10.04 & IBM DS3524 with FC multipath, inactive path is [failed][faulty] instead of [active][ghost]

    - by Graeme Donaldson
    OK, this is my setup: FC Switches IBM/Brocade, Switch1 and Switch2, independent fabrics. Server IBM x3650 M2, 2x QLogic QLE2460, 1 connected to each FC Switch. Storage IBM DS3524, 2x controllers with 4x FC ports each, but only 2x connected on each. +-----------------------------------------------------------------------+ | HBA1 Server HBA2 | +-----------------------------------------------------------------------+ | | | | | | +-----------------------------+ +------------------------------+ | Switch1 | | Switch2 | +-----------------------------+ +------------------------------+ | | | | | | | | | | | | | | | | | | | | +-----------------------------------+-----------------------------------+ | Contr A, port 3 | Contr A, port 4 | Contr B, port 3 | Contr B, port 4 | +-----------------------------------+-----------------------------------+ | Storage | +-----------------------------------------------------------------------+ My /etc/multipath.conf is from the IBM redbook for the DS3500, except I use a different setting for prio_callout, IBM uses /sbin/mpath_prio_tpc, but according to http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-7ubuntu2/changelog, this was renamed to /sbin/mpath_prio_rdac, which I'm using. devices { device { #ds3500 vendor "IBM" product "1746 FAStT" hardware_handler "1 rdac" path_checker rdac failback 0 path_grouping_policy multibus prio_callout "/sbin/mpath_prio_rdac /dev/%n" } } multipaths { multipath { wwid xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx alias array07 path_grouping_policy multibus path_checker readsector0 path_selector "round-robin 0" failback "5" rr_weight priorities no_path_retry "5" } } The output of multipath -ll with controller A as the preferred path: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ 5:0:2:0 sde 8:64 [active][ready] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ 6:0:2:0 sdh 8:112 [failed][faulty] If I change the preferred path using IBM DS Storage Manager to Controller B, the output swaps accordingly: root@db06:~# multipath -ll sdd: checker msg is "directio checker reports path is down" sde: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [failed][faulty] \_ 5:0:2:0 sde 8:64 [failed][faulty] \_ 6:0:1:0 sdg 8:96 [active][ready] \_ 6:0:2:0 sdh 8:112 [active][ready] According to IBM, the inactive path should be "[active][ghost]", not "[failed][faulty]". 
Despite this, I don't seem to have any I/O issues, but my syslog is being spammed with this every 5 seconds: Jun 1 15:30:09 db06 multipathd: sdg: directio checker reports path is down Jun 1 15:30:09 db06 kernel: [ 2350.282065] sd 6:0:2:0: [sdh] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:09 db06 kernel: [ 2350.282071] sd 6:0:2:0: [sdh] Sense Key : Illegal Request [current] Jun 1 15:30:09 db06 kernel: [ 2350.282076] sd 6:0:2:0: [sdh] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:09 db06 kernel: [ 2350.282083] sd 6:0:2:0: [sdh] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:09 db06 kernel: [ 2350.282092] end_request: I/O error, dev sdh, sector 0 Jun 1 15:30:10 db06 multipathd: sdh: directio checker reports path is down Jun 1 15:30:14 db06 kernel: [ 2355.312270] sd 6:0:1:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:14 db06 kernel: [ 2355.312277] sd 6:0:1:0: [sdg] Sense Key : Illegal Request [current] Jun 1 15:30:14 db06 kernel: [ 2355.312282] sd 6:0:1:0: [sdg] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:14 db06 kernel: [ 2355.312290] sd 6:0:1:0: [sdg] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:14 db06 kernel: [ 2355.312299] end_request: I/O error, dev sdg, sector 0 Does anyone know how I can get the inactive path to show "[active][ghost]" instead of "[failed][faulty]"? I assume that once I can get that right then the spam in my syslog will end as well. One final thing worth mentioning is that the IBM redbook doc targets SLES 11 so I'm assuming there's something a little different under Ubuntu that I just haven't figured out yet. Update: As suggested by Mitch, I've tried removing /etc/multipath.conf, and now the output of multipath -ll looks like this: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxdm-1 IBM ,1746 FASt [size=4.9T][features=0][hwhandler=0] \_ round-robin 0 [prio=1][active] \_ 5:0:2:0 sde 8:64 [active][ready] \_ round-robin 0 [prio=1][enabled] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ round-robin 0 [prio=0][enabled] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ round-robin 0 [prio=0][enabled] \_ 6:0:2:0 sdh 8:112 [failed][faulty] So its more or less the same, with the same message in the syslog every 5 minutes as before, but the grouping has changed.

    Read the article

  • Tool to convert blogger.com content to dasBlog

    - by Daniel Moth
    Due to blogger.com dropping FTP support, I've had to move my blog. If you are in a similar situation, this post will help you by showing you the necessary steps to take. Goals No loss on blog posts, comments AND all existing permalinks continue to work (redirect to the correct place). Steps Download the XML files corresponding to your blogger.com content and store them in a folder. Install and configure dasBlog on your local machine. Configure your web.config file (will need updating once you run step 4). Use the tool I describe further down to generate the content and place it at the right place. Test your site locally. Once you are happy, repeat step 2 on your hosting provider of choice. Remember to copy up your dasBlog theme folder if you created one. Copy up the local web.config file and the XML dasBlog content files generated by the tool of step 4. Test your site on the server. Once you are happy, go live (following instructions from your hoster). In my case, I gave the nameservers from my new hoster to my existing domain registrar and they made the switch. Tool (code) At step 4 above I referred to a tool. That is an overstatement, it is simply one 450-line C#code file that you can download here: BloggerToDasBlog.cs. I used this from a .NET 2.0 console app (and I run it under the Visual Studio debugger, i.e. F5) like this: Program.cs. The console app referenced the dasBlog 2.3 ASP.NET Blogging Engine i.e. the newtelligence.DasBlog.Runtime.dll assembly. Let me describe what the code does: Input: A path to a folder where the XML files from the old blogger.com blog reside. It can deal with both types of XML file. A full file path to a file where it creates XML redirect input (as required by the rewriteMap mentioned here). The blog URL. The author's email. The blog author name. A path to an empty folder where the new XML dasBlog content files will get created. The subfolder name used after the domain name in the URL. The 3 reg ex patterns to use. You can use the same as mine, but will need to tweak the monthly_archive rule. Again, to see what values I passed for all the above, see my Program.cs file. Output: It creates dasBlog XML files in the folder specified. It creates those by parsing the old blogger.com XML files that reside in the folder specified. After that is generated, copy it to the "Content" folder under your dasBlog installation. It creates an XML file with a single ignorable root element and a bunch of inner XML elements. You can copy paste these in the web.config file as discussed in this post. Other notes: For each blog post, it detects outgoing links to itself (i.e. to the same blog), and rewrites those to point to the new URLs. So internal links do not rely on the web.config redirects. It deals with duplicate post titles; it does not deal with triplicates and higher. Removes all references to blogger.com (e.g. references to [email protected], the injected hidden footer for statistics that each blog post has and others – see the code). It creates a lot of diagnostic output (in the Output window) and indeed the documentation for the code is in the Debug.WriteLine statements ;) This is not code I will maintain or support – it was a throwaway one-use project that I am sharing here as a starting point for anyone finding themselves in the same boat that I was. Enjoy "as is". Comments about this post welcome at the original blog.
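As a rough sketch of the internal-link rewriting step described above (this is not the actual 450-line converter; the RewriteInternalLinks name, the URL mapping and the example URLs are made up for illustration), this is the kind of transformation each post's HTML goes through so that links pointing back to the same blog no longer depend on the web.config redirect rules:

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    static class LinkRewriter
    {
        // Maps old blogger.com permalinks to their new dasBlog URLs
        // (in the real tool this mapping is built while parsing the blogger XML files).
        static readonly Dictionary<string, string> UrlMap = new Dictionary<string, string>
        {
            { "http://www.example.com/2009/05/some-post.html",
              "http://www.example.com/Blog/SomePost.aspx" }
        };

        // Rewrites only links that point back to the same blog; external links are untouched.
        public static string RewriteInternalLinks(string postHtml, string blogUrl)
        {
            return Regex.Replace(postHtml, "href=\"(?<url>[^\"]+)\"", match =>
            {
                string url = match.Groups["url"].Value;
                string newUrl;
                if (url.StartsWith(blogUrl, StringComparison.OrdinalIgnoreCase)
                    && UrlMap.TryGetValue(url, out newUrl))
                {
                    return "href=\"" + newUrl + "\"";
                }
                return match.Value; // leave external links alone
            });
        }
    }

The same old-to-new mapping is what feeds the rewriteMap entries for the web.config, so that permalinks arriving from outside the blog are redirected while internal links are already correct in the converted content.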

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called Can Google break the computer language barrier? In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. He says: "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models." The Translation Guy has a further discussion on this, called Google Translate is Finished. He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks." Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or its detractors either for that matter. Perhaps, it's the season of goodwill? What is clear to me, however, of course is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they have already and that will also scale effectively where others didn't. So too, will they harness human input, which really is the way to go to train MT in the quality direction. But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly translated corpus drove the gains for Google Translate and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely. In the enterprise applications space, where we have some control over content this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • ‘Publish…’ Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one… which isn’t good considering I have so little of it left! I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and used the upgraded ‘Publish…’ dialog to successfully publish a website to my hard disk for hosting on an internal development server. However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory. The only difference: the first project was an Azure project, the second was a standard ASP.Net Web Application. If you installed the Windows Azure 1.7 SDK, you may want to read this.

    The Problem
    At first it appears that there is no problem. However, you may remember that when publishing a web application, the output window will generally iterate through each of the directories as it copies the files from that directory over. Sure enough, when looking at the output directory, there are no files, no bin directory, nothing…

    Troubleshooting
    Since one site published and the other did not, I believed that the failure may have been due to a failed SQL Server 2012 installation that happened between the two publishes. I rolled back the installation; however, that did not work either. I also checked the Configuration Manager dialog and ensured that the projects were selected to actually build (just checking, even though the output said it built them). I checked the properties of the solution and the projects, and a selection of files in the project, to make sure that they were marked as content… Nothing seemed to work.

    I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit. When I opened the Windows 7 ‘Uninstall a Program’ dialog, I noticed that the Azure SDK came with 2 extra packages that just so happen to be in a Release Candidate state from Microsoft – ‘Microsoft Web Deploy 3.0’ and ‘Microsoft Web Publish – Visual Studio 2010’. It dawned on me that the publish dialog must not be just for Azure, since it appeared when I tried to deploy the regular web application as well. Therefore, it must have been an upgrade to the publish mechanism in Visual Studio. I uninstalled both of the programs, received my old publish dialog once again, and was able to successfully publish the solution above as I had done before.

    After celebrating solving the problem, I tried reinstalling the Azure package to see if it would repair the publishing process. Even though it brought back the updated dialogs, it did not publish any files. Instead of uninstalling and retreating, I now KNEW what the cause was – these were packages not just for Azure – and I now knew a product name to search for.

    The Solution
    Sure enough, with the correct search term in Google – ‘microsoft web publish no files’ – and the timeline set to 1 week, I found what I needed: Microsoft Connect - Publish Web Application FAILS! (by Andrew Rits). I am surprised that I missed something that ended up being so simple… In the Configuration Manager, I had the solution set to build for ‘Any CPU’, the way I had always built and debugged it. However, apparently when installing the new Web Publishing package, the publishing configuration is set up a little differently: there, the configuration was set to ‘x86’ instead of ‘Any CPU’. Sure enough, as soon as I switched the configuration to ‘Release – Any CPU’, the deployment built and published all of my files as I expected.

    Conclusion
    It was a small change, but apparently the new ‘Publish web application’ dialog defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory. I spent forever trying things, but this small drop-down eluded me; the dialog was actually working, I just didn’t have the correct configuration selected. I hope that this saves you the hours of frustration and hastened hair loss that it caused me… I also hope that, before Microsoft brings this publishing package out of RC status, they change the behavior of that menu to default to the settings of the old publish menu the first time it is used. Happy Coding!

    Read the article

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    C# 4.0 includes a new feature called Tuple. A Tuple provides a way of grouping elements of different data types, which is handy in plenty of practical situations – for example, storing the coordinates of a graph. In C# 4.0 we create a Tuple with the Create method. The Create method offers 8 overloads, so you can group a maximum of 8 data types with a Tuple:

    Create(T1) – represents a tuple of size 1
    Create(T1,T2) – represents a tuple of size 2
    Create(T1,T2,T3) – represents a tuple of size 3
    Create(T1,T2,T3,T4) – represents a tuple of size 4
    Create(T1,T2,T3,T4,T5) – represents a tuple of size 5
    Create(T1,T2,T3,T4,T5,T6) – represents a tuple of size 6
    Create(T1,T2,T3,T4,T5,T6,T7) – represents a tuple of size 7
    Create(T1,T2,T3,T4,T5,T6,T7,T8) – represents a tuple of size 8

    Following is some example code for Tuple:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple);

                    var t = System.Tuple.Create<int, string>(1, "Jalpesh");
                    Console.WriteLine(t);
                }
            }
        }

    The output of the above is as expected. You can also access the values inside a Tuple with the ItemN properties, where N is the position of the item in the tuple. Following is an example:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }

    Here you can see I have printed the items with Item1, Item2 and Item3, and the output is as expected.

    We can even create a nested tuple; following is the code for that:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create(1, "Jalpesh", new Tuple<string, string>("P", "Vadgama"));
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }

    Again the output is as expected. As you can see, there are lots of possibilities with Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!
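    One small extra sketch in the same style as the code above: the nested example prints the inner tuple as a whole, but you can also chain the ItemN properties to reach the inner values, and with the 8-argument Create overload the eighth value is exposed through the Rest property.

        using System;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Chain ItemN properties to reach into a nested tuple.
                    var tuple = System.Tuple.Create(1, "Jalpesh", new Tuple<string, string>("P", "Vadgama"));
                    Console.WriteLine(tuple.Item3.Item1); // prints P
                    Console.WriteLine(tuple.Item3.Item2); // prints Vadgama

                    // The 8-argument Create overload stores the eighth value in a
                    // nested tuple that is exposed through the Rest property.
                    var eight = System.Tuple.Create(1, 2, 3, 4, 5, 6, 7, 8);
                    Console.WriteLine(eight.Rest.Item1); // prints 8
                }
            }
        }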

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have 3, we'll call them public, private and download). It used to work (a week or two ago) and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installed, not dist-upgrade), so it should have a very similar configuration. It is not having any problems. I am not having problems on Windows machines/partitions either, only on one of my Precise machines. The two machines are using identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (it has never mattered before). My fstab entries all basically look like this:

    //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0

    Here's the dmesg output on boot:

    [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation
    [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115
    [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation
    [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115
    [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation
    [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115

    There are no other errors I see in the dmesg output. Originally when I ran 'testparm -s', the output contained these lines:

    ERROR: lock directory /var/run/samba does not exist
    ERROR: pid directory /var/run/samba does not exist

    Here are the Samba-related programs I have installed:

    $ dpkg --list|grep -i samba
    ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins
    ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library
    ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba
    ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
    ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
    ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
    ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server

    $ dpkg --list|grep -i smb
    ii dmidecode 2.11-4 SMBIOS/DMI table decoder
    ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
    ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
    ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix
    ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package

    $ dpkg --list|grep -i cifs
    ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities
    ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
    ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix

    I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages solved my errors with 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages: smbclient smbfs cifs-utils samba-common samba-common-bin. Still no luck. Again, it used to work; now it doesn't. A very similarly configured machine works (but some packages are out of date on the working machine).
The NAS has only one interface/IP address, nmblookup works to find its IP from its hostname (from the machine with the issue), and it responds to a ping. Please, any help would be great. I've been searching on AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now and it's driving me crazy!

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made, such as one replacing the other, etc. I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way, across all the nodes in the cluster. To do that, there were/are a few options:

    1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2) use a network filesystem (NFS)
    3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it was a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near good/decent performance for regular file create/delete/access operations.

    Cache coherency was easy since it was basically always direct IO down to the disk device, and this ensured that any time one issued a write() it went directly down to the disk and did not return until the write() was completed. Same for read(): any read from a datafile was a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.

    So while OCFS worked well for that, it did not have much of a normal filesystem feel, and it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code...). It did its job well: it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not related to database data files was just not very useful in general. Log files, OK; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) had been out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem – a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

    - Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    - Make it a Linux native filesystem instead of a native shim layer and a portable core
    - Support standard POSIX compliancy and be fully cache coherent with all operations
    - Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    - Be modern: support large files, 32/64-bit, journaling, data-ordered journaling, endian neutrality (mountable across endianness and architectures), ...

    Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way.

    OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, a flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel and didn't follow an open development model had a much harder time and faced much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel (it had been accepted a little earlier, in the release candidates), joining the mainline Linux kernel tree as one of the many filesystems. It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors, making it easy for everyone to have access to a CFS.

    Today the source code for OCFS2 is approximately 85,000 lines of code. We made OCFS2 a production filesystem with full support for customers that ran Oracle Database on Linux; no extra or separate support contract was needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; for RHEL5, builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available, even prior to inclusion into mainline, and as of 2.6.16 the source code was simply part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always in the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel, and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality.

    OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: "OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system."

    Here is a list of some of the important features that are included:
    - Variable Block and Cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
    - Extent-based Allocations: tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
    - Optimized Allocations: supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    - File Cloning/snapshots: REFLINK is a feature which introduces copy-on-write clones of files in a cluster-coherent way.
    - Indexed Directories: allows efficient access to millions of objects in a directory.
    - Metadata Checksums: detects silent corruption in inodes and directories.
    - Extended Attributes: supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
    - Advanced Security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    - Quotas: supports user and group quotas.
    - Journaling: supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
    - Endian and Architecture neutral: supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
    - In-built Cluster-stack with DLM: includes an easy to configure, in-kernel cluster stack with a distributed lock manager.
    - Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os: supports all modes of I/O for maximum flexibility and performance.
    - Comprehensive Tools Support: provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
    UEK also allowed us to avoid having to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped providing the extra packages for Oracle Linux's RHEL-compatible kernel and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bug fixes and any code that goes into the mainline Linux kernel and affects filesystems automatically also modify OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really is just part of Linux, like EXT3 or BTRFS, etc.; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, also is and continues to be supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2.

    To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, which makes for a very one-sided stream. So for now I am going to not publish comments from anyone, to be fair to all sides.
    You are always welcome to email me, and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article
