Search Results

Search found 59 results on 3 pages for '16bit'.

Page 1/3 | 1 2 3  | Next Page >

  • C# console applications all 16bit?

    - by Jon
    Hi all, I was reading up about NTVDM.exe after a quick test console app I built crashed on a friend's machine complaining about this EXE. As I understand it, all DOS cmd windows (C# console apps included) run as 16-bit, not 32-bit. Is this true? Does this mean all of my work's console-app back-office apps are running as 16-bit rather than making the most of the 32 bits available? What about Windows services? I believe we wrote one as a console app and then made it run as a Windows service. Thanks

    Read the article
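
    The premise here is checkable: a console app built with a modern toolchain runs as a native 32- or 64-bit process, and NTVDM only enters the picture for real 16-bit DOS binaries. A minimal, language-neutral check sketched in C (the C# equivalent would inspect IntPtr.Size):

        #include <stdio.h>

        int main(void)
        {
            /* A real-mode 16-bit compiler would report 16 or 32 here
               (depending on the pointer model); a modern console build
               reports 32 or 64, i.e. it is not running under NTVDM. */
            printf("pointer width: %u bits\n",
                   (unsigned)(sizeof(void *) * 8));
            return 0;
        }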

  • Virtual terminal with at least 16bit color?

    - by trusktr
    It's 2013, and there should be terminals with more than 256 colors by now (you'd think). Are there any terminals with more than 8-bit color (usually from 8 bits we jump to 16 bits) to use with Linux? Ideally, I'd like a Linux terminal with 32-bit color for the foreground and background of each character. So the functionality would be the same, with escape sequences, but just more of them (or something like that?).

    Read the article
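
    Several terminal emulators do in fact support 24-bit "truecolor" through extended SGR escape sequences: ESC[38;2;R;G;Bm sets the foreground and ESC[48;2;R;G;Bm the background, per character, which is exactly the mechanism the question asks for. A small C sketch that prints a red-to-blue gradient (whether it renders depends on the emulator, not the kernel):

        #include <stdio.h>

        int main(void)
        {
            /* 48;2;R;G;B selects a 24-bit background color */
            for (int i = 0; i < 64; i++) {
                int r = 255 - i * 4, b = i * 4;
                printf("\x1b[48;2;%d;0;%dm ", r, b);
            }
            printf("\x1b[0m\n"); /* reset attributes */
            return 0;
        }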

  • Server 2008 Print Redirection is failing but only on 16Bit apps

    - by ian
    I'm the main programmer for SoEasyAccounting, and we are installing to Server 2008 Standard Service Pack 1. We install to 2003 with no problems. It is important to understand that the print failure only happens in certain circumstances. Note: we use a standard Windows printer selection box to choose the printer.
    Terms used:
      • Superbase = a program language that uses ntvdm.exe (the Windows process hosting 16-bit apps)
      • Local Printer = printing to a driver loaded onto the Server 2008
      • Redirected Printer = printing to an automatically established remote printer through an RDP connection
    Printing scenarios:
      • Server 2008 - 1: print from Notepad to a Redirected Printer = works
      • Server 2008 - 2: print from Superbase to a Local Printer = works
      • Server 2008 - 3: print from Superbase to a Redirected Printer = fails
      • Server 2003 - 4: print from Superbase to a Redirected Printer = works
    Results: the print leaves a "Local Downlevel Document" entry in the driver's print queue but nothing prints, and Superbase reports that the "Print command failed". Event Viewer shows nothing related to the failure.
    Things I have tried: (i) switching Easy Print on and off; (ii) loading a copy of the redirected driver on the server.
    Any help greatly appreciated. So far two days spent trying to resolve this, and here goes my weekend :( unless someone has an idea :)

    Read the article

  • Correct way to Convert 16bit PCM Wave data to float

    - by fredley
    I have a wave file in 16-bit PCM form. I've got the raw data in a byte[] and a method for extracting samples, and I need them in float format, i.e. a float[], to do a Fourier transform. Here's my code; does this look right? I'm working on Android, so javax.sound.sampled etc. is not available.

        private static short getSample(byte[] buffer, int position) {
            return (short) (((buffer[position + 1] & 0xff) << 8)
                            | (buffer[position] & 0xff));
        }

        ...

        float[] samples = new float[samplesLength];
        for (int i = 0; i < input.length / 2; i += 2) {
            samples[i / 2] = (float) getSample(input, i) / (float) Short.MAX_VALUE;
        }

    Read the article
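
    The byte assembly in getSample is right for little-endian WAV data, but note the loop bound: i advances two bytes per iteration, so it should run while i < input.length, not input.length / 2 — as written, only half of the samples array gets filled. A corrected version of the same conversion, sketched in C (dividing by 32768 rather than by Short.MAX_VALUE = 32767 is a common convention, and a judgment call):

        #include <stdint.h>
        #include <stddef.h>

        /* Convert little-endian 16-bit PCM bytes to floats in [-1, 1).
           n_bytes must be even; out must hold n_bytes / 2 floats. */
        static void pcm16_to_float(const uint8_t *in, size_t n_bytes, float *out)
        {
            for (size_t i = 0; i + 1 < n_bytes; i += 2) {
                /* low byte first: WAV sample data is little-endian */
                int16_t s = (int16_t)((in[i + 1] << 8) | in[i]);
                out[i / 2] = (float)s / 32768.0f;
            }
        }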

  • Processing 16bit sample audio

    - by user2431088
    Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method, but I am not sure of any way to convert the WAV file to a byte array.

        /// <summary>
        /// Process 16 bit sample
        /// </summary>
        /// <param name="wave"></param>
        public void Process(ref byte[] wave)
        {
            _waveLeft = new double[wave.Length / 4];
            _waveRight = new double[wave.Length / 4];

            if (_isTest == false)
            {
                // Split out channels from sample
                int h = 0;
                for (int i = 0; i < wave.Length; i += 4)
                {
                    _waveLeft[h] = (double)BitConverter.ToInt16(wave, i);
                    _waveRight[h] = (double)BitConverter.ToInt16(wave, i + 2);
                    h++;
                }
            }
            else
            {
                // Generate artificial sample for testing
                _signalGenerator = new SignalGenerator();
                _signalGenerator.SetWaveform("Sine");
                _signalGenerator.SetSamplingRate(44100);
                _signalGenerator.SetSamples(16384);
                _signalGenerator.SetFrequency(5000);
                _signalGenerator.SetAmplitude(32768);
                _waveLeft = _signalGenerator.GenerateSignal();
                _waveRight = _signalGenerator.GenerateSignal();
            }

            // Generate frequency domain data in decibels
            _fftLeft = FourierTransform.FFTDb(ref _waveLeft);
            _fftRight = FourierTransform.FFTDb(ref _waveRight);
        }

    Read the article
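
    In C# the quick route is File.ReadAllBytes(path), but that returns the whole RIFF container, header included, so the 16-bit samples only begin at the "data" chunk. A hedged C sketch of the chunk walk (read_wav_data is an illustrative name, not a library call, and error handling is minimal):

        #include <stdio.h>
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Walk a canonical little-endian WAV and return the "data" payload.
           Real files may carry extra chunks (LIST, fact); the loop skips them. */
        static uint8_t *read_wav_data(const char *path, uint32_t *out_len)
        {
            FILE *f = fopen(path, "rb");
            uint8_t hdr[12], ch[8];
            if (!f) return NULL;
            if (fread(hdr, 1, 12, f) != 12 ||
                memcmp(hdr, "RIFF", 4) != 0 || memcmp(hdr + 8, "WAVE", 4) != 0) {
                fclose(f);
                return NULL;
            }
            while (fread(ch, 1, 8, f) == 8) {
                uint32_t len = ch[4] | (ch[5] << 8) |
                               ((uint32_t)ch[6] << 16) | ((uint32_t)ch[7] << 24);
                if (memcmp(ch, "data", 4) == 0) {
                    uint8_t *buf = malloc(len);
                    if (buf && fread(buf, 1, len, f) == len) {
                        *out_len = len;
                        fclose(f);
                        return buf;   /* raw interleaved 16-bit samples */
                    }
                    free(buf);
                    break;
                }
                fseek(f, (long)((len + 1) & ~1u), SEEK_CUR); /* chunks are word-aligned */
            }
            fclose(f);
            return NULL;
        }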

  • Hash 32bit int to 16bit int?

    - by dkamins
    What are some simple ways to hash a 32-bit integer (e.g. an IP address, a Unix time_t, etc.) down to a 16-bit integer? E.g. hash_32b_to_16b(0x12345678) might return 0xABCD. Let's start with this as a horrible but functional example solution:

        function hash_32b_to_16b(val32b) {
            return val32b % 0xffff;
        }

    The question is specifically about JavaScript, but feel free to add any language-neutral solutions, preferably without using library functions. Simple = good. Wacky+obfuscated = amusing.

    Read the article
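
    One language-neutral improvement over the modulo example is XOR-folding, which mixes the high half into the low half so inputs differing only in their upper 16 bits still hash differently (note also that % 0xffff maps both 0 and 0xFFFF to 0, a slight bias that masking avoids). In JavaScript this is ((v ^ (v >>> 16)) & 0xFFFF); the same idea sketched in C:

        #include <stdint.h>
        #include <stdio.h>

        /* Fold the top 16 bits into the bottom 16 with XOR. */
        static uint16_t hash_32b_to_16b(uint32_t v)
        {
            return (uint16_t)((v ^ (v >> 16)) & 0xFFFFu);
        }

        int main(void)
        {
            printf("0x%04X\n", hash_32b_to_16b(0x12345678u)); /* prints 0x444C */
            return 0;
        }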

  • Making a DVD video with a still image and PCM 16bit audio with ffmpeg

    - by João
    I'm trying to make a small video with a still image and a sound file playing in the background, to pass to dvdauthor to create a DVD. The command I'm using is this:

        ffmpeg -loop_input -i image.jpg -qscale 2 -i song.flac -aspect 4:3 \
               -target pal-dvd -acodec pcm_s16le -shortest output.mpg

    However, the resulting video file doesn't have sound at all (testing it in VLC Player). I don't know if I can't combine "-acodec pcm_s16le" with "-target pal-dvd" to override the latter, or if there is something else wrong with the command. If I try without the "-acodec pcm_s16le" parameter, the video and audio work; I can even create a DVD ISO with it. However, the audio stays AC3, and I wanted the video to carry the lossless audio, not a compressed one. I suppose the DVD standard allows PCM audio, am I right?

    Read the article

  • How to check if two System.Drawing.Color structures represent the same color in 16 bit color depth?

    - by David
    How can I check whether two System.Drawing.Color structures represent the same color in 16-bit color depth (or, generally, based on the value of Screen.PrimaryScreen.BitsPerPixel)? Say I set Form.TransparencyKey to Value1 (of type Color); when the user selects a new background color for the form (Value2), I want to make sure I don't set the entire form transparent. On 32-bit color depth screens I simply compare the two values: if (Value1 == Value2). However, this does not work on 16-bit color depth screens, as many Color values for Value2 represent the same actual 16-bit color as Value1, as I found out the hard way.

    Read the article
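
    One approach is to quantize both colors to the 5-6-5 layout that a 16-bit mode typically uses and compare the quantized values; in C# the Color.R/G/B properties would feed the same masks. A sketch in C (the exact rounding a given graphics driver applies is an assumption here, so verify against the target machine):

        #include <stdint.h>
        #include <stdbool.h>

        /* Drop the low bits of each 8-bit channel, approximating RGB565. */
        static uint16_t to_rgb565(uint8_t r, uint8_t g, uint8_t b)
        {
            return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }

        static bool same_in_16bit(uint8_t r1, uint8_t g1, uint8_t b1,
                                  uint8_t r2, uint8_t g2, uint8_t b2)
        {
            return to_rgb565(r1, g1, b1) == to_rgb565(r2, g2, b2);
        }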

  • Why can't a 64 bit OS run a 16 bit application?

    - by Bob
    Why is it that a 32-bit OS installed on a 64-bit CPU can run old 16-bit DOS applications, but if you install a 64-bit OS it can't run those applications directly and needs some sort of emulation (that doesn't always work perfectly)? To be more specific: I have an Intel Core 2 Duo (64-bit) processor, and when I had Windows XP and Windows 7 (both 32-bit) installed, it could run old DOS applications; but now that I have installed Windows 7 64-bit, it can't run those same applications anymore.

    Read the article

  • Ways to divide the high/low byte from a 16bit address?

    - by Grissiom
    Hello, I'm developing software on an 8051 processor. A frequent job is to split a 16-bit address into its high and low bytes. I want to see how many ways there are to achieve it. The ways I've come up with so far are (say ptr is a 16-bit pointer and int is a 16-bit int):

        ADDH = (unsigned int) ptr >> 8;
        ADDL = (unsigned int) ptr & 0x00FF;

    and

        ADDH = ((unsigned char *)&ptr)[0];
        ADDL = ((unsigned char *)&ptr)[1];

    Does anyone have any other bright ideas? ;) And can anyone tell me which way is more efficient?

    Read the article
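
    A third common variant is a union, which makes the byte aliasing explicit and usually compiles to two plain byte loads. Note that the pointer-cast version in the question depends on byte order: Keil C51 on the 8051 stores ints big-endian, so index [0] is the high byte there, while on a little-endian host it would be the low byte. A sketch written for a standard C host rather than the 8051:

        #include <stdint.h>
        #include <stdio.h>

        /* Alias one 16-bit word with its two bytes. */
        union addr16 {
            uint16_t word;
            uint8_t  byte[2];
        };

        int main(void)
        {
            union addr16 a = { .word = 0x1234 };
            /* prints "34 12" on a little-endian host, "12 34" on big-endian */
            printf("%02X %02X\n", a.byte[0], a.byte[1]);
            return 0;
        }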

  • Read half precision float (float16 IEEE 754r) binary data in matlab

    - by Michael
    You were a great help last time; I hope you can give me some advice this time, too. I read a binary file into MATLAB with bit16 (format = bitn), and I get a string of ones and zeros: bin = '1 00011 1111111111' (16 bits: 1st = sign, 2nd-6th = exponent, 7th-16th = mantissa). According to ftp://www.fox-toolkit.org/pub/fasthalffloatconversion.pdf it can be 'converted' like out = (-1)^bin(1) * 2^(bin(2:6)-15) * 1.bin(7:16) [are the exponent and mantissa still binary?]. Can someone help me out and tell me how to deal with the 'eeeee' and '1.mmmmmmmmmm' as mentioned in the pdf, please. Thanks a lot! Michael

    Read the article
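
    For the notation in the PDF: 'eeeee' is the exponent field read as an ordinary binary integer and then biased by 15, and '1.mmmmmmmmmm' is the implicit-leading-one form, i.e. 1 + mantissa/1024. A C sketch of the full decode, including the subnormal and Inf/NaN corner cases (compile with -lm):

        #include <stdint.h>
        #include <stdio.h>
        #include <math.h>

        /* IEEE 754 binary16: 1 sign bit, 5 exponent bits, 10 mantissa bits. */
        static float half_to_float(uint16_t h)
        {
            int sign = (h >> 15) & 0x1;
            int exp  = (h >> 10) & 0x1F;
            int man  = h & 0x3FF;
            float s  = sign ? -1.0f : 1.0f;

            if (exp == 0)   /* zero or subnormal: no implicit 1, exponent -14 */
                return s * ldexpf((float)man / 1024.0f, -14);
            if (exp == 31)  /* all-ones exponent: infinity or NaN */
                return man ? NAN : s * INFINITY;
            return s * ldexpf(1.0f + (float)man / 1024.0f, exp - 15);
        }

        int main(void)
        {
            /* the question's pattern 1 00011 1111111111 is 0x8FFF */
            printf("%g\n", half_to_float(0x8FFF)); /* about -0.000488 */
            return 0;
        }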

  • Java error on bilinear interpolation of 16 bit data

    - by Jon
    I'm having an issue using bilinear interpolation for 16-bit data. I have two images, origImage and displayImage. I want to use AffineTransformOp to filter origImage through an AffineTransform into displayImage, which is the size of the display area. origImage is of type BufferedImage.TYPE_USHORT_GRAY and has a raster of type sun.awt.image.ShortInterleavedRaster. Here is the code I have right now:

        displayImage = new BufferedImage(getWidth(), getHeight(), origImage.getType());
        try {
            op = new AffineTransformOp(atx, AffineTransformOp.TYPE_BILINEAR);
            op.filter(origImage, displayImage);
        } catch (Exception e) {
            e.printStackTrace();
        }

    In order to show the error I have created two gradient images. One has values in the 15-bit range (max of 32767) and one in the 16-bit range (max of 65535). Below are the two images. [15-bit gradient image] [16-bit gradient image] These two images were created in identical fashion and should look identical, but notice the line across the middle of the 16-bit image. At first I thought this was an overflow problem; however, it is weird that it manifests itself in the center of the gradient instead of at the end where the pixel values are higher. Also, if it were an overflow issue, I would expect the 15-bit image to be affected as well. Any help on this would be greatly appreciated.

    Read the article
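
    A line exactly mid-gradient is consistent with the interpolator handling the unsigned 16-bit samples as signed shorts: 32768 is where a signed 16-bit value wraps negative, which is the middle of the 16-bit ramp and out of reach of the 15-bit one. One workaround is to interpolate manually in a wider type; a sketch of unsigned-safe bilinear sampling in C:

        #include <stdint.h>

        /* Bilinear sample of an unsigned 16-bit grayscale image at (x, y).
           Arithmetic runs in uint32_t/float, so values above 32767 cannot
           wrap the way signed 16-bit intermediates would. */
        static uint16_t bilinear_u16(const uint16_t *img, int w, int h,
                                     float x, float y)
        {
            int x0 = (int)x, y0 = (int)y;
            int x1 = x0 + 1 < w ? x0 + 1 : x0;  /* clamp at the right edge  */
            int y1 = y0 + 1 < h ? y0 + 1 : y0;  /* clamp at the bottom edge */
            float fx = x - x0, fy = y - y0;

            uint32_t p00 = img[y0 * w + x0], p10 = img[y0 * w + x1];
            uint32_t p01 = img[y1 * w + x0], p11 = img[y1 * w + x1];

            float top = p00 + fx * ((float)p10 - (float)p00);
            float bot = p01 + fx * ((float)p11 - (float)p01);
            return (uint16_t)(top + fy * (bot - top) + 0.5f);
        }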

  • 80x86 16-bit asm: lea cx, [cx*8+cx] causes error on NASM (compiling .com file)

    - by larz
    The title says it all. The error NASM gives (despite my working OS) is "invalid effective address". Now, I've seen many examples of how to use LEA, and I think I've got it right, yet my NASM dislikes it. I tried "lea cx, [cx+9]" and it worked; "lea cx, [bx+cx]" didn't. Now, if I extended my registers to 32 bits (i.e. "lea ecx, [ecx*8+ecx]") everything would be well, but I am restricted to 16- and 8-bit registers only. Is there anyone knowledgeable who could explain WHY my assembler doesn't let me use LEA the way I supposed it should be used? Thanks.

    Read the article

  • Building 16 bit OS - character array not working

    - by brainbarshan
    Hi. I am building a 16-bit operating system, but character arrays do not seem to work. Here is my example kernel code:

        asm(".code16gcc\n");

        void putchar(char);

        int main()
        {
            char *str = "hello";
            putchar('A');
            if (str[0] == 'h')
                putchar('h');
            return 0;
        }

        void putchar(char val)
        {
            asm("movb %0, %%al\n"
                "movb $0x0E, %%ah\n"
                "int $0x10\n"
                :
                : "m"(val));
        }

    It prints: A. That means the putchar function is working properly, but if (str[0] == 'h') putchar('h'); is not working. I am compiling it with:

        gcc -fno-toplevel-reorder -nostdinc -fno-builtin -I./include -c -o ./bin/kernel.o ./source/kernel.c
        ld -Ttext=0x9000 -o ./bin/kernel.bin ./bin/kernel.o -e 0x0

    What should I do?

    Read the article

  • Is it only possible to display 64k vertices on the monitor with 16bit indices?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and stumbled on the fact that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store the indices (triangle to vertex). This means that I can only address 2^16 = 65536 vertices. Also, I read on the internet that you should prefer 16-bit over 32-bit indices because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get along with so few vertices? I thought faces alone already used a lot of polygons (which are made up of vertices?). It's not relevant for me yet, but I am interested: Do game scenes use only 65536 vertices? Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)? Is there some method to get more into the GPU buffer? I already read in some other posts that there seems to be a limit of 64k per mesh too, so maybe you can compact stuff into meshes?

    Read the article
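
    The usual answer is that 65536 is a per-draw-call limit, not a per-scene one: large models are split into batches whose indices are rebased to start at 0, and each batch is drawn separately. A rough C sketch of the splitting idea (emit_batch and the names around it are illustrative, standing in for whatever uploads a batch to the GPU; error checks are omitted for brevity):

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_BATCH_VERTS 65536u

        /* Stub: a real renderer would copy the referenced vertices and
           issue a draw call here. */
        static void emit_batch(const uint32_t *verts, uint32_t n_verts,
                               const uint16_t *idx, uint32_t n_idx)
        {
            printf("batch: %u vertices, %u indices\n",
                   (unsigned)n_verts, (unsigned)n_idx);
            (void)verts; (void)idx;
        }

        /* Split a 32-bit-indexed triangle list into 16-bit-indexed batches.
           remap[] maps a global vertex id to its slot in the current batch,
           or -1 if it has not appeared in this batch yet. */
        static void split_into_16bit_batches(const uint32_t *idx, size_t n_idx,
                                             size_t n_total_verts)
        {
            int32_t  *remap = malloc(n_total_verts * sizeof *remap);
            uint32_t *verts = malloc(MAX_BATCH_VERTS * sizeof *verts);
            uint16_t *out   = malloc(n_idx * sizeof *out);
            uint32_t nv = 0, ni = 0;
            memset(remap, -1, n_total_verts * sizeof *remap);

            for (size_t t = 0; t + 2 < n_idx; t += 3) {
                if (nv + 3 > MAX_BATCH_VERTS) {   /* batch full: flush it */
                    emit_batch(verts, nv, out, ni);
                    memset(remap, -1, n_total_verts * sizeof *remap);
                    nv = ni = 0;
                }
                for (int k = 0; k < 3; k++) {     /* rebase this triangle */
                    uint32_t v = idx[t + k];
                    if (remap[v] < 0) { remap[v] = (int32_t)nv; verts[nv++] = v; }
                    out[ni++] = (uint16_t)remap[v];
                }
            }
            if (ni) emit_batch(verts, nv, out, ni);
            free(remap); free(verts); free(out);
        }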

  • Picking a suitable resolution for a modern low-res game?

    - by MrKatSwordfish
    I'm working on a 2D game project right now (using SFML+OpenGL and C++), and I'm trying to figure out how to choose a resolution. I want my game to have a pixel resolution around that of classic '16bit'-era consoles like the Super Nintendo or Neo Geo. However, I'd also like my game to fit the 16:9 aspect ratio that most modern PC monitors use. Finally, I'd like to include an option for running fullscreen. I know that I could create my own low-res 16:9 resolution that is more or less around the size of SNES or Neo Geo games. However, the problem seems to be that doing so would leave me with a non-standard resolution that my monitor would not be able to support in fullscreen mode. For example, if I divide the common 16:9 resolution 1920x1080 by 4, I get a 16:9 resolution that is relatively close to the resolution used by 16bit-era games: 480x270. That would be fine in a windowed mode, but I don't think it would be supported in fullscreen mode. How can I choose a resolution that suits my needs? Can I use something like 480x270? If so, how would I go about getting fullscreen mode to work with such a non-standard resolution? (I'm guessing OpenGL/SFML might have a way of up-scaling... but...)

    Read the article
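
    The common pattern is the one the question guesses at: keep the fullscreen mode at the monitor's native resolution, render the game to a small off-screen target (render-to-texture in OpenGL, or an sf::RenderTexture in SFML), and scale that up each frame; nearest-neighbour integer scaling keeps the chunky pixels crisp. The arithmetic for the largest integer factor that fits, sketched in C (480x270 is the assumed logical resolution from the question):

        #include <stdio.h>

        int main(void)
        {
            const int fb_w = 480, fb_h = 270;      /* logical game resolution */
            const int scr_w = 1920, scr_h = 1080;  /* native fullscreen mode  */

            int sx = scr_w / fb_w, sy = scr_h / fb_h;
            int scale = sx < sy ? sx : sy;         /* largest factor fitting both */
            int out_w = fb_w * scale, out_h = fb_h * scale;
            int off_x = (scr_w - out_w) / 2;       /* letterbox offsets */
            int off_y = (scr_h - out_h) / 2;

            printf("draw %dx%d at %dx scale -> %dx%d at (%d,%d)\n",
                   fb_w, fb_h, scale, out_w, out_h, off_x, off_y);
            return 0;
        }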

  • How does it matter if a character is 8 bit or 16 bit or 32 bit

    - by vin
    Well, I am reading Programming Windows with MFC, and I came across Unicode and ASCII characters. I understood the point of using Unicode over ASCII, but what I do not get is how and why it is important to use 8-bit/16-bit/32-bit characters. What good does it do for the system? How does the operating system's processing differ for characters of different widths? My question here is: what does it mean for a character to be an x-bit character?

    Read the article
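
    One concrete way to see the difference is to encode the same character at each width. The code unit size (8, 16, or 32 bits) determines how many units one character occupies, not which characters can be represented; an 8-bit encoding like UTF-8 just spends more units on characters outside ASCII. A C sketch using the euro sign, U+20AC:

        #include <stdio.h>

        int main(void)
        {
            /* U+20AC in three Unicode encodings */
            const unsigned char  utf8[]  = { 0xE2, 0x82, 0xAC }; /* 3 x 8-bit  */
            const unsigned short utf16[] = { 0x20AC };           /* 1 x 16-bit */
            const unsigned int   utf32[] = { 0x20AC };           /* 1 x 32-bit */

            printf("UTF-8 : %u bytes\n", (unsigned)sizeof utf8);
            printf("UTF-16: %u bytes\n", (unsigned)sizeof utf16);
            printf("UTF-32: %u bytes\n", (unsigned)sizeof utf32);
            return 0;
        }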

  • I get GL_INVALID_VALUE after calling glTexSubImage2D

    - by user892644
    I am trying to figure out why my texture allocation does not work. Here is the code:

        glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048,
                        GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]);

    glTexSubImage2D returns GL_INVALID_VALUE, but the maximum texture size allowed on my card is 16384x16384. The source image is 16-bit (red 5, green 6, blue 5).

    Read the article

  • Convert WAV to PCM(wav)

    - by Marco
    Hi, I'm looking for a small PCM converter tool that I can run from the DOS console. I have some wave files and always need this output: PCM 44.1 kHz, 16-bit, mono. Is there a program for this? Thanks for any answers.

    Read the article

  • What's the DCPU-16 thing all about?

    - by ChrisRamakers
    Ever since Notch (of Minecraft fame) announced that his next project will include programmable 16-bit CPUs in-game, everybody seems to want to write VMs for the spec Notch has written up. I've seen them written in C, C++, Go, JavaScript, CoffeeScript, ... Can anyone enlighten me as to what's so special about the spec Notch wrote up? Or is it just that it's the first game that actually contains a CPU in-game that you can do whatever you want with? It sparked my curiosity, but I fail to grasp what makes it so special that suddenly everybody needs to write code for it.

    Read the article

  • Asterisk not playing custom sounds on Ubuntu Server 11.04

    - by jochy2525
    I've installed Asterisk on my Ubuntu Server and everything works fine except playing custom sounds. Asterisk's own sounds play, but a file I've uploaded does not (on other servers it works; it is a .WAV, PCM 16-bit, 8000 Hz). Here is some log output:

        [Feb 6 22:55:45] WARNING[11045] file.c: File custom/sohoitsoluciones does not exist in any format
        [Feb 6 22:55:45] WARNING[11045] file.c: Unable to open custom/sohoitsoluciones (format 0x4 (ulaw)): No such file or directory
        [Feb 6 22:55:45] WARNING[11045] app_playback.c: ast_streamfile failed on SIP/Out4903-0000001d for custom/sohoitsoluciones

    How can I get Asterisk to play a custom sound?

    Read the article

  • MSDOS "Hello World" EXE

    - by divinci
    Hi all, an open question, but I can't find anywhere to start! I want to compile a "Hello World" MS-DOS EXE; not a program that runs in 16-bit mode or in MS-DOS mode on top of a Windows OS, but a HELOWRLD.EXE that I can run on my MS-DOS box. Thank you!

    Read the article
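
    For a plain real-mode DOS executable, the program itself is just standard C; what matters is the toolchain target. Open Watcom still ships a 16-bit DOS target, and something like wcl -bcl=dos hello.c should produce an EXE that runs on a real MS-DOS box (the exact invocation is from memory, so check the Watcom docs; Borland Turbo C works the same way). A minimal sketch:

        /* hello.c -- build with a 16-bit DOS toolchain, e.g.:
         *   wcl -bcl=dos hello.c      (Open Watcom, invocation assumed)
         */
        #include <stdio.h>

        int main(void)
        {
            printf("Hello, world!\n");
            return 0;
        }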

  • COM Local Server

    - by Gamer
    Hi, similar to LocalServer32, which is used to specify the path to a 32-bit local COM server, is there any registry entry for specifying the path to a 64-bit local COM server? If there is none, can we use LocalServer32 for 64-bit servers as well? Note: to my knowledge there are only two registry entries, LocalServer and LocalServer32. According to MSDN, the former is used for registering a 16-bit server and the latter for registering a 32-bit server. Thanks and regards, Gamer

    Read the article
