Search Results

Search found 59 results on 3 pages for '16bit'.

Page 2 of 3

  • Random number from a range in a Bash Script

    - by Jack
    Hi, I need to generate a random port number between 2000 and 65000 from a shell script. The problem is that $RANDOM is only a 16-bit number, so I'm stuck! PORT=$(($RANDOM%63000+2001)) would work nicely if it weren't for the size limitation. Does anyone have an example of how I can do this, maybe by extracting something from /dev/urandom and getting it within a range? Thanks.
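
    A minimal sketch of the /dev/urandom idea from the question, shown in Python rather than shell (the port bounds are the ones the asker gives; rejection sampling is added to avoid modulo bias):

        import os

        LOW, HIGH = 2000, 65000            # inclusive port range from the question
        SPAN = HIGH - LOW + 1

        def random_port():
            # Draw 4 bytes from the OS entropy pool; reject draws that would
            # bias the modulo reduction, then map into the desired range.
            limit = (2 ** 32 // SPAN) * SPAN
            while True:
                n = int.from_bytes(os.urandom(4), "little")
                if n < limit:
                    return LOW + n % SPAN

        print(random_port())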

  • BioShock 2 installer is not running under Windows 7 64-bit

    - by kaykay
    When I try to install the BioShock 2 game under Windows 7 64-bit I get the following error: "The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." If I try to run it in compatibility mode (as XP and as admin) I get the same error. I don't understand why it is giving me this error, because if it is a 32-bit program then it should just run like the other 32-bit applications I have. And I am sure BioShock 2 is not a 16-bit game. What can I do to run this program? It ran on Windows XP in the past.

  • Best way to keep an MS-DOS application alive? (Virtual MS-DOS)

    - by user11010
    Hi, I have a good old DOS application which is still required to run. Unfortunately the PC is dying and needs to be replaced. The PC was running Windows 98 and the software was executed from an MS-DOS command line. Any new PC we order will come with at least Windows XP, which has the Windows 2000 (NT) kernel and is not DOS-based. Now I need a strategy to be able to run that DOS application on a modern PC. What I was thinking is to buy a standard office box and use a Virtual PC or VMware setup which runs MS-DOS and my DOS application. Would this be a way to go? Any concerns? What about 16-bit/32-bit issues? Any opinion, experience or tip would be great. Thanks

  • Remote Desktop not following display settings

    - by John
    I have my RDP client set up to use the highest settings for connecting to another PC on my LAN, which has its display set to 1280x1024 at 32-bit color. RDP is specifically set to use 32-bit depth, but when I connect it drops to 16-bit. The PC I connect to is (amongst other things) used to do some 3D graphics. I don't expect great performance, just to check that it works... but over RDP it doesn't; the 3D app doesn't think the hardware is the same. Does RDP's integration with Windows mean it is providing some virtualised rendering system? Should I use something less 'clever' like VNC, to literally screen-grab the contents of the screen without altering the settings?

  • VNC application/terminal server

    - by sebastian nielsen
    Which software should I use if I want to set up a Linux VNC terminal server that works in this way: The VNC server should be able to accept up to X simultaneous connections on the same port 5900. The VNC server should use 640x480 at 8- or 16-bit color. When the VNC server receives a connection, it should start a new "session" for that user and auto-launch a specific Linux application for them. If the application is killed, crashes, or exits in any way, the user should be disconnected (kicked) from the server. If the user disconnects, the application should be killed gracefully, in a way that allows it to clean up. (There should be no way to "pick up" an old session.) Any ideas?

  • Articles on x386 and later CPU-based systems

    - by user32569
    Hi there. I know this is a hard question, and possibly not one to be answered here, but if there is some article (or more) you know about, please post a link. As for books, it's sad, but many great computer books cannot be bought in my country. You can find many articles online which describe how memory was mapped back on pre-x386 CPUs: how there were explicit holes reserved for memory-mapped I/O, the BIOS, the video BIOS, etc., and how there was the A20 line for allowing higher memory access, and so on. Problem is, times have changed. Today BIOSes are many times larger, and pure x86 16-bit mode is used for booting and ROM flashing only. OSes ignore the BIOS, as they access everything using drivers. And I just want to know how it works today. I know it's not a very specific question, but I have read the OSDev wiki and many articles, and they all refer to the days before the massive usage of pure 32-bit CPUs.

  • Problems with both LightDM and GDM using DisplayLink USB monitor

    - by Austin
    When I use LightDM, it will auto-login to the desktop just fine. The only problem is that Compiz doesn't work, and menus don't work. I can't right-click the desktop, and I can't select program menus in the top bar (i.e. clicking "File" does nothing). When I use GDM, I only get a blank blue screen and the mouse cursor. I can't Ctrl+Alt+Backspace to restart, but I can Ctrl+Alt+F1 and Ctrl+Alt+F7 to switch modes. I don't think it's auto-logging me in, but I'm not sure. It plays the login screen noise. Will update with more information when I get home! EDIT: Okay, so I did a fresh install, just to ensure I hadn't borked something playing in the console. I reconfigured my setup as I did before, with the same results. Here's what I followed. The only difference is that instead of setting "vga=normal nomodeset" I set "GRUB_GFXPAYLOAD_LINUX = text". Also, I only have the DisplayLink monitor configured in my xorg.conf file. At this point I'm using the open radeon driver, although I used the proprietary ati driver before. I'm not sure if I'm having a problem with the X configuration, the graphics driver, the DisplayLink driver, Unity, LightDM, Compiz, or something else. The resolution of the monitor is 800x480, 16-bit. I tried setting a larger virtual resolution of 1200x720 (because the real resolution is lower than the recommended resolution), but it causes Ubuntu to boot into low-graphics mode. When I get home I'm going to install the fglrx driver and see if it enables virtual resolutions, which may further enable my window manager to function properly.

  • Pdssql.dll cannot be found

    - by Kolten
    I am attempting to open a Crystal Reports 8.5 document, and when I try to set the database to the production data server, I get the error "Pdssql.dll cannot be found". Googling shows this is a common problem, but none of the fixes I tried seem to work. This is a new computer. I do have the SQL Server 2008 client tools installed, but I believe previously I had the SQL Server 2005 client tools. I attempted to install the SQL Server 2005 client tools, but that didn't go through because I have 2008 installed, and I require 2008 to do my job now. Everything I search for says this is a 16-bit driver and that I need to install the 2005 client tools. Unfortunately this cannot be done, since I have 2008. Is there some sort of work-around I can do? Thanks

  • ffmpeg 0.5 FLV to WAV conversion creates WAV files that other programs won't open

    - by superrebel
    Hi, I am using the following command to convert FLV files to audio files to feed into Julian, a speech-to-text program:

        cat ./jon2.flv | ffmpeg -i - -vn -acodec pcm_s16le -ar 16000 -ac 1 -f wav - | cat - > jon2.wav

    The cats are there for debugging purposes, as the final use will be a running program that pipes FLV into ffmpeg's stdin, with the stdout going to Julian. The resulting WAV files are identified by "file" as:

        jon3.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 16000 Hz

    VLC (based on ffmpeg) plays the file, but no other tools will open/see the data; they show empty WAV files or won't open/play them. For example, Soundbooth from CS4. Has anyone else had similar problems? Julian requires WAV files that are 16-bit mono at 16000 Hz. Julian does seem to read the file, but doesn't seem to go through the entire file (may be unrelated). Thanks, -rr
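
    One hedged explanation (not from the original post): when its output is a pipe, ffmpeg cannot seek back to patch the RIFF and data chunk sizes in the WAV header, so they stay at placeholder values; stream-oriented players such as VLC ignore them, while stricter tools treat the file as empty. A sketch that rewrites the sizes after the fact, assuming the canonical 44-byte PCM header with no extra chunks:

        import os
        import struct

        def fix_wav_sizes(path):
            # Patch the RIFF chunk size (offset 4) and the data chunk size
            # (offset 40) from the real file size on disk.
            total = os.path.getsize(path)
            with open(path, "r+b") as f:
                f.seek(4)
                f.write(struct.pack("<I", total - 8))
                f.seek(40)
                f.write(struct.pack("<I", total - 44))

        fix_wav_sizes("jon2.wav")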

  • Hidden features of x86 assembly

    - by Earlz
    I am still a fan of x86 assembly (sorta) and know a lot of developers still using x86 assembly. Although there are by far fewer features available in assembly, let us list the most useful and not-so-well-known ones. Of course the question is along the lines of the Hidden Features questions listed below: Hidden Features of JavaScript Hidden Features of CSS Hidden Features of C# Hidden Features of VB.NET Hidden Features of Java Hidden Features of ASP.NET Hidden Features of Python Hidden Features of TextPad Hidden Features of Eclipse Hidden Features of Classic ASP Please specify one feature per answer. Also, you can cover all flavors of x86, such as 16-bit (real mode), 32-bit, and 64-bit. Please keep it assembler-neutral, though: both Intel and AT&T syntax are welcome, but please don't, for example, demonstrate a useful macro feature specific to yasm.

  • Large Scale VHDL modularization techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ... I don't know, something along these lines, probably all only working with 16-bit operands. I might even cut it down and use only a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates, and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry-lookahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming the port maps, or processes.)

  • Java playback of 24-bit audio is incorrect

    - by Paul Hampson
    I am using the Java Sound API to implement a simple console playback program based on http://www.jsresources.org/examples/AudioPlayer.html. Having tested it using a 24-bit ramp file (each sample is the previous sample plus 1, over the full 24-bit range), it is evident that something odd is happening during playback. The recorded output is not the contents of the file (I have a digital loopback to verify this). It seems to be misinterpreting the samples in some way that causes the left channel to look like it is having some gain applied to it, while the right channel looks like it is being attenuated. I have looked into whether the PAN and BALANCE controls need setting, but these aren't available, and I have checked the Windows XP sound system settings. Any other form of playback of this ramp file is fine. If I do the same test with a 16-bit file it performs correctly, with no corruption of the stream. So does anyone have any idea why the Java Sound API is modifying my audio stream?
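
    For anyone wanting to reproduce the test signal, here is a minimal sketch (an assumed reconstruction, not the poster's actual tool) that writes a 24-bit little-endian PCM ramp over the full range, with the same value on both channels so left/right gain differences stand out:

        import wave

        BITS = 24
        lo, hi = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1   # full 24-bit range

        with wave.open("ramp24.wav", "wb") as w:
            w.setnchannels(2)       # identical ramp on left and right
            w.setsampwidth(3)       # 3 bytes per sample = 24-bit PCM
            w.setframerate(44100)
            frames = bytearray()
            for n in range(lo, hi + 1):                   # ~16.8M frames: a big file
                b = (n & 0xFFFFFF).to_bytes(3, "little")  # two's complement
                frames += b + b                           # left + right
            w.writeframes(bytes(frames))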

  • OpenMP + SSE gives no speedup

    - by Sayan Ghosh
    Hi, my professor found this interesting experiment of 3D linearly separable kernel convolution using SSE and OpenMP, and gave me the task of benchmarking it on our system. The author claims a crazy 18-fold speedup over the serial approach! It might not always be that much, but we were expecting at least a 2-4 times speedup running this on a dual-core Intel. http://software.intel.com/en-us/articles/16bit-3d-convolution-sse4openmp-implementation-on-penryn-cpu/#comment-41994 Alas, we could find exactly no speedup. The serial code always performs better, with or without OpenMP. I am using Linux, and observed a certain trend: when no other processes are running on the system, after a while the load average starts increasing and the %CPU utilization falls. Another probable false positive which I ran into accidentally: I started the program, then immediately paused it. Then I resumed it in the background with bg, and saw a speedup of more than 2. This happens all the time! Any advice would be great. Thanks, Sayan

  • How come the FTP protocol sometimes produces transmission errors if the data goes over TCP, which is checksummed?

    - by Cray
    Every once in a while, downloading (especially large) files through FTP will produce errors. I am guessing that's also partly the reason why all major sites publish external checksums along with their downloads. How is this possible if FTP goes over TCP, which has a built-in checksum and resends data if it is transmitted corruptly? One could argue that this is due to the short length of the CRC in the TCP protocol (which is 16-bit I think, or something like that), and that collisions are simply happening too often. But 1) for this to be true, not only must there be a CRC collision, but the random network error must also modify both the CRC in the packet and the packet itself, so that the CRC will be valid for the new packet... Even with a 16-bit CRC, is that so likely? 2) There are seemingly not many errors in, say, browsing the web, which also goes over TCP/IP.
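
    For a sense of scale (an illustrative sketch, not from the original question): TCP's check is in fact a 16-bit one's-complement sum rather than a CRC, so a random corruption slips through whenever it happens to preserve that sum, with probability about 2^-16. The corrupted-segment counts below are hypothetical:

        # Chance that at least one corrupted TCP segment passes the 16-bit check.
        P_PASS = 2 ** -16                   # ~1 in 65536 per corrupted segment

        def p_undetected(corrupted_segments):
            return 1 - (1 - P_PASS) ** corrupted_segments

        for n in (10, 1000, 100000):        # hypothetical corruption counts
            print("%6d bad segments -> %.4f%% chance one gets through"
                  % (n, 100 * p_undetected(n)))

    On a multi-gigabyte transfer over a noisy path the corrupted-segment count can get large, which is one reason end-to-end checksums get published anyway.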

  • Copying a 14-bit grayscale image (saved in long[]) to a PictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes for each pixel). My application is written in C++/CLI, and the PictureBox is of the .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that is working is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
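
    For what it's worth, GDI+ generally cannot render Format16bppGrayScale bitmaps, which would explain the 16bpp attempt failing. The usual workaround is a fixed requantization with no min/max pass: keep the top 8 of the 14 significant bits with a single shift. A sketch of the idea (shown in Python/NumPy for brevity; the same shift applies to the long array in C++/CLI):

        import numpy as np

        def to_display_gray(raw):
            # Map 14-bit samples stored in 32-bit ints to 8-bit grayscale:
            # mask the 14 significant bits, then drop the lowest 6.
            return ((raw & 0x3FFF) >> 6).astype(np.uint8)

        # Illustrative use with a fake 4x4 frame of 14-bit data.
        frame = np.random.randint(0, 1 << 14, size=(4, 4), dtype=np.int32)
        print(to_display_gray(frame))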

  • What would you do if you coded a C++/OO cross-platform framework and realized it's lying on your disk

    - by Manuel
    This project started as a development platform because I wanted to be able to write games for mobile devices, but also to be able to run and debug the code on my desktop machine (the EPOC device emulator was that bad). The platforms it currently supports are: Windows desktop, WinCE, Symbian, iPhone. The architecture is quite complete, with a 16-bit 565 video framebuffer, blitters, basic raster ops, software pixel shaders, an audio mixer with shaders (DSP FX), basic input, a simple virtual file system... although this thing is at its first write, and so there are places where some refactoring would be needed. Everything has been abstracted away, and the guiding principles are: mostly clean code, as if it were a book to just be read; object orientation, without sacrificing performance; mobile-centric. The idea was to open-source it, but without being able to manage it, I doubt the software itself would benefit from this move. Nevertheless, I myself have learned a lot from unmaintained projects. So, thanking you in advance for reading all this... really, what would you do?

  • Randomly generating a sequence of ints in a specific range

    - by vvv
    Hi, I am unsure how to put this, and my math skills aren't that strong, but here's what I need. I want to generate a list of all the 16-bit integers (0-65535), but every time I do so I want to seed the algorithm randomly, so that each time the list starts with a different integer and all the integers are generated exactly once, just in a random order. Small example (1-5): ... 1, 5, 3, 2, 4 4, 3, 1, 5, 2 2, 1, 4, 5, 3 ... Any help?
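
    What is being described is a random permutation of the range; a minimal sketch using a seeded Fisher-Yates shuffle (Python's random.shuffle is one implementation of it):

        import random

        def shuffled_ints(n=65536, seed=None):
            # Returns 0..n-1, each exactly once, in seed-dependent random order.
            rng = random.Random(seed)   # seed=None draws a fresh OS seed per call
            values = list(range(n))
            rng.shuffle(values)         # Fisher-Yates shuffle under the hood
            return values

        print(shuffled_ints(5))         # e.g. [3, 0, 4, 2, 1]; differs each run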

  • How a batch file runs on a remote machine when started by PsExec

    - by user38780
    I am having an issue running a batch file on a remote machine using PsExec. The file runs, but it does not run like it does when run through Remote Desktop. The batch runs a 32-bit application, which opens multiple 16-bit applications; these should all run under one ntvdm.exe (in one memory space). Through Remote Desktop the batch file runs under the explorer process and works correctly, opening only one ntvdm.exe. Using PsExec the batch runs, but not under the explorer process, and a separate ntvdm.exe is opened for each process. I found that running the batch via explorer in PsExec works, but it comes up with a "File Download - Security Warning", e.g.: psexec.exe \\compname -u username -p password -s -d -i 0 explorer C:\Program.bat I want to be able to run the batch successfully without receiving warnings; it is a local warning, not a network-share warning. (It is possible to recreate the warning by typing "explorer C:\windows\system32\cmd.exe" in Run.) I would like to know if anyone knows of a way to get PsExec to run the batch file as though it was started by explorer, or a way of removing the local "File Download - Security Warning". Thanks

  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, in groups of 8 which form bytes, each of which can be addressed, hence byte-addressable memory. The address bus carries the location of a byte of memory. If an address bus is 32 bits in size, it can hold up to 2^32 numbers, and hence can refer to up to 2^32 bytes of memory = 4 GB of memory, and any memory greater than that is useless. The data bus is used to send the value to be written to/read off the memory. If I have a data bus of size 32 bits, a maximum of 4 bytes can be written to/read off the memory at a time. I find no relation between this size and the maximum memory size possible. But I read here that: "Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most 32 bits can be moved in a single memory operation." This says that the size of the data bus is what gives an OS the name 8-bit, 16-bit and so on. What is wrong with my understanding?
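
    A quick check of the address-side arithmetic from the question (the data-bus width only sets how much data moves per memory operation, not how much memory is reachable):

        # Addressable bytes for a given address-bus width, byte-addressable memory.
        for width in (16, 20, 32, 64):
            print("%2d-bit address bus can reach 2**%d = %s bytes"
                  % (width, width, format(2 ** width, ",")))
        # 32 bits -> 4,294,967,296 bytes = 4 GB, matching the question.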

  • Distortion problem with Creative audio equalizer

    - by e-t172
    Hi, I have a problem with the Creative Console EQ; I don't know if it's fixable or not (is the EQ software or hardware on these cards?). Basically, I get enormous distortion with certain sounds in the 30-125 Hz range. When this happens I get some sort of "frrzzzz" (sorry, I'm French and don't really know the correct English word for it) on top of the original sound. I have a Sound Blaster Audigy SE. I'm using the Daniel_K drivers, on Windows 7 Professional x64. All the effects are disabled except the EQ.
    Steps to reproduce:
    1. Put the card in 24-bit/96 kHz mode. The problem is also present with 16-bit/48 kHz, but seems to be less audible.
    2. In the Creative Console, use the following EQ settings (screenshot from the original post not preserved).
    3. Play this sound at a reasonably high volume. You should hear distortion on the two "booms", especially the second one.
    4. Disable the Creative EQ.
    5. Play the sound in an application with an integrated EQ (e.g. foobar2000, ffdshow) using the same EQ parameters. There is no distortion.
    Conclusion: the Creative EQ is broken. Is anyone having the same problem? I'm also interested in results with other Creative cards, or even other brands' sound cards with a similar EQ feature.

  • Best way to handle huge strings in C#

    - by srk
    I have to write the data below to a text file after replacing the two placeholder values ##IP## and ##Port## with real ones. What is the best way? Should I hold it all in one string, use Replace, and write it to the text file? Data:
    [APP] iVersion= 101 pcVersion=1.01a pcBuildDate=Mar 27 2009
    [MAIN] iFirstSetup= 0 rcMain.rcLeft= 676 rcMain.rcTop= 378 rcMain.rcRight= 1004 rcMain.rcBottom= 672 iShowLog= 0 iMode= 1
    [GENERAL] iTips= 1 iTrayAnimation= 1 iCheckColor= 1 iPriority= 1 iSsememcpy= 1 iAutoOpenRecv= 1 pcRecvPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\recv pcFileName=FantasyRemote iLanguage= 1
    [SERVER] iAcceptVideo= 1 iAcceptAudio= 1 iAcceptInput= 1 iAutoAccept= 1 iAutoTray= 0 iConnectSound= 1 iEnablePassword= 0 pcPassword= pcPort=7902
    [CLIENT] iAutoConnect= 0 pcPassword= pcDefaultPort=7902
    [NETWORK] pcConnectAddr=##IP## pcPort=##Port##
    [VIDEO] iEnable= 1 pcFcc=AMV3 pcFccServer= pcDiscription= pcDiscriptionServer= iFps= 30 iMouse= 2 iHalfsize= 0 iCapturblt= 0 iShared= 0 iSharedTime= 5 iVsync= 1 iCodecSendState= 1 iCompress= 2 pcPlugin= iPluginScan= 0 iPluginAspectW= 16 iPluginAspectH= 9 iPluginMouse= 1 iActiveClient= 0 iDesktop1= 1 iDesktop2= 2 iDesktop3= 0 iDesktop4= 3 iScan= 1 iFixW= 16 iFixH= 8
    [AUDIO] iEnable= 1 iFps= 30 iVolume= 6 iRecDevice= 0 iPlayDevice= 0 pcSamplesPerSec=44100Hz pcChannels=2ch:Stereo pcBitsPerSample=16bit iRecBuffNum= 150 iPlayBuffNum= 4
    [INPUT] iEnable= 1 iFps= 30 iMoe= 0 iAtlTab= 1
    [MENU] iAlwaysOnTop= 0 iWindowMode= 0 iFrameSize= 4 iSnap= 1
    [HOTKEY] iEnable= 1 key_IDM_HELP=0x00000070 mod_IDM_HELP=0x00000000 key_IDM_ALWAYSONTOP=0x00000071 mod_IDM_ALWAYSONTOP=0x00000000 key_IDM_CONNECT=0x00000072 mod_IDM_CONNECT=0x00000000 key_IDM_DISCONNECT=0x00000073 mod_IDM_DISCONNECT=0x00000000 key_IDM_CONFIG=0x00000000 mod_IDM_CONFIG=0x00000000 key_IDM_CODEC_SELECT=0x00000000 mod_IDM_CODEC_SELECT=0x00000000 key_IDM_CODEC_CONFIG=0x00000000 mod_IDM_CODEC_CONFIG=0x00000000 key_IDM_SIZE_50=0x00000074 mod_IDM_SIZE_50=0x00000000 key_IDM_SIZE_100=0x00000075 mod_IDM_SIZE_100=0x00000000 key_IDM_SIZE_200=0x00000076 mod_IDM_SIZE_200=0x00000000 key_IDM_SIZE_300=0x00000000 mod_IDM_SIZE_300=0x00000000 key_IDM_SIZE_400=0x00000000 mod_IDM_SIZE_400=0x00000000 key_IDM_CAPTUREWINDOW=0x00000077 mod_IDM_CAPTUREWINDOW=0x00000004 key_IDM_REGION=0x00000077 mod_IDM_REGION=0x00000000 key_IDM_DESKTOP1=0x00000078 mod_IDM_DESKTOP1=0x00000000 key_IDM_ACTIVE_MENU=0x00000079 mod_IDM_ACTIVE_MENU=0x00000000 key_IDM_PLUGIN=0x0000007A mod_IDM_PLUGIN=0x00000000 key_IDM_PLUGIN_SCAN=0x00000000 mod_IDM_PLUGIN_SCAN=0x00000000 key_IDM_DESKTOP2=0x00000078 mod_IDM_DESKTOP2=0x00000004 key_IDM_DESKTOP3=0x00000079 mod_IDM_DESKTOP3=0x00000004 key_IDM_DESKTOP4=0x0000007A mod_IDM_DESKTOP4=0x00000004 key_IDM_WINDOW_NORMAL=0x0000000D mod_IDM_WINDOW_NORMAL=0x00000004 key_IDM_WINDOW_NOFRAME=0x0000000D mod_IDM_WINDOW_NOFRAME=0x00000002 key_IDM_WINDOW_FULLSCREEN=0x0000000D mod_IDM_WINDOW_FULLSCREEN=0x00000001 key_IDM_MINIMIZE=0x00000000 mod_IDM_MINIMIZE=0x00000000 key_IDM_MAXIMIZE=0x00000000 mod_IDM_MAXIMIZE=0x00000000 key_IDM_REC_START=0x00000000 mod_IDM_REC_START=0x00000000 key_IDM_REC_STOP=0x00000000 mod_IDM_REC_STOP=0x00000000 key_IDM_SCREENSHOT=0x0000002C mod_IDM_SCREENSHOT=0x00000002 key_IDM_AUDIO_MUTE=0x00000073 mod_IDM_AUDIO_MUTE=0x00000004 key_IDM_AUDIO_VOLUME_DOWN=0x00000074 mod_IDM_AUDIO_VOLUME_DOWN=0x00000004 key_IDM_AUDIO_VOLUME_UP=0x00000075 mod_IDM_AUDIO_VOLUME_UP=0x00000004 key_IDM_CTRLALTDEL=0x00000023 mod_IDM_CTRLALTDEL=0x00000003 key_IDM_QUIT=0x00000000 mod_IDM_QUIT=0x00000000 key_IDM_MENU=0x0000007B mod_IDM_MENU=0x00000000
    [OVERLAY] iIndicator= 1 iAlphaBlt= 1 iEnterHide= 0 pcFont=MS UI Gothic
    [AVI] iSound= 1 iFileSizeLimit= 100000 iPool= 4 iBuffSize= 32 iStartDiskSpaceCheck= 1 iStartDiskSpace= 1000 iRecDiskSpaceCheck= 1 iRecDiskSpace= 100 iCache= 0 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\avi
    [SCREENSHOT] iSound= 1 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\ss pcPlugin=BMP
    [CDLG_SERVER] mrcWnd.rcLeft= 667 mrcWnd.rcTop= 415 mrcWnd.rcRight= 1013 mrcWnd.rcBottom= 634
    [CWND_CLIENT] miShowLog= 0 m_iOverlayLock= 0
    [CDLG_CONFIG] mrcWnd.rcLeft= 467 mrcWnd.rcTop= 247 mrcWnd.rcRight= 1213 mrcWnd.rcBottom= 802 miTabConfigSel= 2
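
    The hold-it-all-in-one-string approach is fine for a template of this size. A minimal sketch (in Python for brevity; the C# equivalent is File.ReadAllText, two String.Replace calls, and File.WriteAllText). Note that the data uses ##Port##, not ##PORT##, so the replacement must match that case; file names and values here are illustrative:

        ip, port = "192.168.1.10", "7902"      # illustrative values

        with open("template.ini", "r") as f:   # the data above, saved as a template
            text = f.read()

        text = text.replace("##IP##", ip).replace("##Port##", port)

        with open("FantasyRemote.ini", "w") as f:
            f.write(text)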

  • Getting PCM values of WAV files

    - by user2431088
    I have a .wav mono file (16-bit, 44.1 kHz) and I'm using the code below. If I'm not wrong, this should give me output values between -1 and 1, which I can then apply an FFT to (to be converted to a spectrogram later on). However, my output is nowhere near -1 and 1. This is a portion of my output:
    7.01214599609375 17750.2552337646 8308.42733764648 0.000274658203125 1.00001525878906 0.67291259765625 1.3458251953125 16.0000305175781 24932 758.380676269531 0.0001068115234375
    This is the code, which I got from another post. Edit 1:
    public static Double[] prepare(String wavePath, out int SampleRate)
    {
        Double[] data;
        byte[] wave;
        byte[] sR = new byte[4];
        System.IO.FileStream WaveFile = System.IO.File.OpenRead(wavePath);
        wave = new byte[WaveFile.Length];
        data = new Double[(wave.Length - 44) / 4]; // shifting the headers out of the PCM data
        WaveFile.Read(wave, 0, Convert.ToInt32(WaveFile.Length)); // read the wave file into the wave variable

        /*********** Converting and PCM accounting ***************/
        for (int i = 0; i < data.Length; i += 2)
        {
            data[i] = BitConverter.ToInt16(wave, i) / 32768.0;
        }

        /************** assigning sample rate **********************/
        for (int i = 24; i < 28; i++)
        {
            sR[i - 24] = wave[i];
        }
        SampleRate = BitConverter.ToInt16(sR, 0);
        return data;
    }
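
    For what it's worth, the stray values look consistent with the loop decoding the 44-byte RIFF header as samples: the conversion starts at wave offset 0 rather than 44, and the same index steps through both arrays. A hedged sketch of the intended conversion using only Python's standard library (the file name is illustrative):

        import struct
        import wave

        def prepare(path):
            # Returns (samples, sample_rate) with samples scaled to [-1.0, 1.0).
            with wave.open(path, "rb") as w:        # parses the RIFF header for us
                rate = w.getframerate()
                raw = w.readframes(w.getnframes())  # PCM payload only, no header
            count = len(raw) // 2                   # 16-bit mono: 2 bytes/sample
            ints = struct.unpack("<%dh" % count, raw)  # little-endian shorts
            return [s / 32768.0 for s in ints], rate

        samples, rate = prepare("test.wav")
        print(rate, min(samples), max(samples))     # extremes stay within [-1, 1)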

  • Question regarding IP checksum code

    - by looktt
    unsigned short /* this function generates header checksums */
    csum (unsigned short *buf, int nwords)
    {
        unsigned long sum;
        for (sum = 0; nwords > 0; nwords--)  // add words (16 bits) together
            sum += *buf++;
        sum = (sum >> 16) + (sum & 0xffff);  // add carry over
        sum += (sum >> 16);                  // what does this step do??? add a possible
                                             // left-over byte? But isn't it already
                                             // added in the loop (if any)?
        return ((unsigned short) ~sum);
    }
    I assume nwords is the number of 16-bit words, not 8-bit bytes (if there is an odd byte, nwords is rounded up to the next whole word), is that correct? The line sum = (sum >> 16) + (sum & 0xffff) adds the carry over to form the 16-bit one's-complement sum. What is the purpose of the step sum += (sum >> 16)? Adding a left-over byte? How? Thanks!
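
    A worked example of what that last step is for (not from the original post): the first fold can itself carry out of bit 15, and sum += (sum >> 16) folds that final carry back in. Tracing a running sum of 0x1FFFF in Python:

        def fold16(s):
            # One fold: add everything above bit 15 back into the low 16 bits.
            return (s >> 16) + (s & 0xFFFF)

        s = 0x1FFFF              # a running sum whose first fold still overflows
        s = fold16(s)            # 0x0001 + 0xFFFF = 0x10000: a new carry appears
        s += s >> 16             # the questioned step: 0x10000 + 1 = 0x10001
        print(hex(~s & 0xFFFF))  # 0xfffe, the correct 16-bit header checksum

    So the step handles a carry produced by the first fold, not a left-over byte; an odd trailing byte has to be zero-padded into a full word before csum is called.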

  • OpenGL pixels drawn with each horizontal pair swapped

    - by Tim Kane
    I'm somewhat new to OpenGL, though I'm fairly sure my problem lies in the pixel format being used, or in how my texture is being generated... I'm drawing a texture onto a flat 2D quad using a 16-bit RGB5_A1 pixel format, though I don't make use of any alpha at this stage. The problem I'm having is that each horizontal pair of pixel values is swapped. That is... if the pixel positions should be in this order (assume an 8x2 image):
    0 1 2 3 4 5 6 7
    they are instead drawn as:
    1 0 3 2 5 4 7 6
    (The original post included a comparison image: left is what I get, right is what I should get.) The question is... how have I ended up with this? Is there something wrong with the pixel format? Unlikely, since the colours all appear correct, and I would expect all kinds of nastiness if it were down to endianness. Suggestions greatly appreciated. Update: turns out the problem was in my source renderer. Interestingly, I've avoided the problem entirely by using 32-bit textures (haven't tried 24-bit at this point).
