Search Results

Search found 1466 results on 59 pages for 'sizeof'.

Page 44/59 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Context migration in CUDA.NET

    - by Vyacheslav
    I'm currently using the CUDA.NET library by GASS. I need to initialize CUDA arrays (actually cuBLAS vectors, but it doesn't matter) in one CPU thread and use them in another CPU thread. But the CUDA context, which holds all initialized arrays and loaded functions, can be attached to only one CPU thread at a time. There is a mechanism, the context migration API, to detach a context from one thread and attach it to another, but I don't know how to use it properly in CUDA.NET. I tried something like this:

        class Program
        {
            private static float[] vector1, vector2;
            private static CUDA cuda;
            private static CUBLAS cublas;
            private static CUdeviceptr ptr;

            static void Main(string[] args)
            {
                cuda = new CUDA(false);
                cublas = new CUBLAS(cuda);
                cuda.Init();
                cuda.CreateContext(0);
                AllocateVectors();
                cuda.DetachContext();
                CUcontext context = cuda.PopCurrentContext();
                GetVectorFromDeviceAsync(context);
            }

            private static void AllocateVectors()
            {
                vector1 = new float[] { 1f, 2f, 3f, 4f, 5f };
                ptr = cublas.Allocate(vector1.Length, sizeof(float));
                cublas.SetVector(vector1, ptr);
                vector2 = new float[5];
            }

            private static void GetVectorFromDevice(object objContext)
            {
                CUcontext localContext = (CUcontext)objContext;
                cuda.PushCurrentContext(localContext);
                cuda.AttachContext(localContext);
                // change the vector somehow
                vector1[0] = -1;
                // copy the changed vector to the device
                cublas.SetVector(vector1, ptr);
                cublas.GetVector(ptr, vector2);
                CUDADriver.cuCtxPopCurrent(ref localContext);
            }

            private static void GetVectorFromDeviceAsync(CUcontext cUcontext)
            {
                Thread thread = new Thread(GetVectorFromDevice);
                thread.IsBackground = false;
                thread.Start(cUcontext);
            }
        }

    But execution fails on the attempt to copy the changed vector to the device, because the context is not attached. Any ideas how I can get this to work?
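
    For what it's worth, the push/pop pattern the wrapper exposes comes straight from the CUDA driver API. A minimal C sketch of migrating one context between two CPU threads (error checking elided; the thread functions are illustrative, not part of CUDA.NET):

        #include <cuda.h>

        static CUcontext ctx;

        /* thread A: create the context, allocate device memory, then detach */
        void thread_a(void)
        {
            CUdevice dev;
            cuInit(0);
            cuDeviceGet(&dev, 0);
            cuCtxCreate(&ctx, 0, dev);
            /* ... allocate and fill device vectors here ... */
            cuCtxPopCurrent(&ctx);   /* ctx now floats, bound to no thread */
        }

        /* thread B: attach the floating context before touching its allocations */
        void thread_b(void)
        {
            cuCtxPushCurrent(ctx);   /* ctx is now current in this thread */
            /* ... use the device pointers created in thread A ... */
            cuCtxPopCurrent(&ctx);   /* detach again when done */
        }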

    Read the article

  • CreateProcessAsUser doesn't draw the GUI

    - by pavel.tuzov
    Hi, I have a Windows service running under the "SYSTEM" account that checks whether a specific application is running for each logged-in user. If the application is not running, the service starts it under the corresponding user name. I'm trying to accomplish this using CreateProcessAsUser(). The service does start the application under the corresponding user name, but the GUI is not drawn. (Yes, I'm making sure that the "Allow service to interact with desktop" check box is enabled.) System: XP SP3; language: C#. Here is some code that might be of interest:

        PROCESS_INFORMATION processInfo = new PROCESS_INFORMATION();
        startInfo.cb = Marshal.SizeOf(startInfo);
        startInfo.lpDesktop = "winsta0\\default";
        bResult = Win32.CreateProcessAsUser(
            hToken, null, strCommand, IntPtr.Zero, IntPtr.Zero,
            false, 0, IntPtr.Zero, null, ref startInfo, out processInfo);

    As far as I understand, setting startInfo.lpDesktop = "winsta0\\default"; should have used the desktop of the corresponding user. Even contrary to what is stated here: http://support.microsoft.com/kb/165194, I tried setting lpDesktop to null, or not setting it at all, both giving the same result: the process was started in the name of the expected user and I could see a part of the window's title bar. The "invisible" window intercepts mouse click events and handles them as expected. It just doesn't draw itself. Is anyone familiar with such a problem, and do you know what I am doing wrong?
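
    For reference, the native call that this P/Invoke wraps looks like the following in C (a sketch only: hToken and the command line are assumed to come from elsewhere, and error handling is elided):

        #include <windows.h>

        BOOL launch_in_user_session(HANDLE hToken, wchar_t *cmdline)
        {
            STARTUPINFOW si;
            PROCESS_INFORMATION pi;

            ZeroMemory(&si, sizeof(si));
            si.cb = sizeof(si);
            /* target the interactive window station and default desktop */
            si.lpDesktop = L"winsta0\\default";

            return CreateProcessAsUserW(hToken, NULL, cmdline,
                                        NULL, NULL, FALSE, 0,
                                        NULL, NULL, &si, &pi);
        }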

    Read the article

  • [Gray Hat Python] Simple debugger won't work?

    - by Rami Jarrar
    Hi, I'm reading Gray Hat Python, and I've reached this:

        class debugger():
            def __init__(self):
                self.h_process = None
                self.pid = None
                self.debugger_active = False

            def load(self, path_to_exe):
                creation_flags = DEBUG_PROCESS
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindows = 0x0
                startupinfo.cb = sizeof(startupinfo)
                if kernel32.CreateProcessA(path_to_exe, None, None, None, None,
                                           creation_flags, None, None,
                                           byref(startupinfo),
                                           byref(process_information)):
                    print "[*] We have successfully launched the process!"
                    print "[*] PID: %d" % (process_information.dwProcessId)
                    self.h_process = self.open_process(process_information.dwProcessId)
                else:
                    print "[*] Error: 0x%08x." % (kernel32.GetLastError())

            def open_process(self, pid):
                h_process = self.open_process(pid)
                if kernel32.DebugActiveProcess(pid):
                    self.debugger_active = True
                    self.pid = int(pid)
                    self.run()
                else:
                    print "[*] Unable to attach to the process."

            def run(self):
                while self.debugger_active == True:
                    self.get_debug_event()

            def get_debug_event(self):
                debug_event = DEBUG_EVENT()
                continue_status = DBG_CONTINUE
                if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
                    raw_input("Press a key to continue...")
                    self.debugger_active = False
                    kernel32.ContinueDebugEvent( \
                        debug_event.dwProcessId, \
                        debug_event.dwThreadId, \
                        continue_status )

            def detach(self):
                if kernel32.DebugActiveProcessStop(self.pid):
                    print "[*] Finished debugging. Exiting..."
                    return True
                else:
                    print "There was an error"
                    return False

    When I run my_test.py:

        import my_dbg

        debugger = my_dbg.debugger()
        pid = raw_input('Enter the PID of the process to attach to: ')
        debugger.open_process(int(pid))
        debugger.detach()

    I get this error:

        Traceback (most recent call last):
          File "C:/Python26/dbgpy/my_test.py", line 5, in <module>
            debugger.attach(int(pid))
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        ...........
        ...........
        ...........
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        RuntimeError: maximum recursion depth exceeded

    It's because of the loop and something else, but what is it? I'm running on Windows using Python 2.6.4. :) Update: I removed h_process = self.open_process(pid), but I get the same error for the next instruction, if kernel32.DebugActiveProcess(pid), so I now think the problem is in the while loop, but what is it?

    Read the article

  • Problems with Aero Glass in Delphi 7 applications

    - by Cralias
    Hi everyone! I'm trying to remake some of my older projects to support Aero Glass. Although it's kinda easy to enable the glass frame, I've encountered some major problems. I used this code:

        var
          xVer: TOSVersionInfo;
          hDWM: THandle;
          DwmIsCompositionEnabled: function(pbEnabled: BOOL): HRESULT; stdcall;
          DwmExtendFrameIntoClientArea: function(hWnd: HWND; const pxMarInset: PRect): HRESULT; stdcall;
          bEnabled: BOOL;
          xRect: TRect;
        // ...
        xVer.dwOSVersionInfoSize := SizeOf(TOSVersionInfo);
        GetVersionEx(xVer);
        if xVer.dwMajorVersion >= 6 then
        begin
          hDWM := LoadLibrary('dwmapi.dll');
          @DwmIsCompositionEnabled := GetProcAddress(hDWM, 'DwmIsCompositionEnabled');
          @DwmExtendFrameIntoClientArea := GetProcAddress(hDWM, 'DwmExtendFrameIntoClientArea');
          if (@DwmIsCompositionEnabled <> nil) and (@DwmExtendFrameIntoClientArea <> nil) then
          begin
            DwmIsCompositionEnabled(@bEnabled);
            if bEnabled then
            begin
              xRect := Rect(-1, -1, -1, -1);
              DwmExtendFrameIntoClientArea(FrmMain.Handle, @xRect);
            end;
          end;
          FreeLibrary(hDWM);
        end;

    So I have the pretty glass window now. But because black is now the transparency color (kinda stupid choice, why couldn't it be pink?), anything that is clBlack becomes transparent, too. That means all labels, edits, button texts... even if I set the text to some other color at design time, DWM still renders it that color AND transparent. Well, my question would be: is it possible to solve this somehow?
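
    As a point of comparison, the documented native signature takes a MARGINS record rather than a RECT. A minimal C sketch of the "sheet of glass" call (assumes a valid window handle and linking against dwmapi):

        #include <windows.h>
        #include <dwmapi.h>

        /* extend the glass frame into the entire client area */
        HRESULT extend_glass(HWND hwnd)
        {
            MARGINS margins = { -1, -1, -1, -1 };   /* -1 = whole window */
            BOOL enabled = FALSE;

            if (SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled)
                return DwmExtendFrameIntoClientArea(hwnd, &margins);
            return E_FAIL;
        }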

    Read the article

  • C# - NetUseAdd from NetApi32.dll on Windows Server 2008 and IIS7

    - by Jack Ryan
    I am attempting to use NetUseAdd to add a share that is needed by an application. My code looks like this:

        [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        internal static extern uint NetUseAdd(
            string UncServerName,
            uint Level,
            IntPtr Buf,
            out uint ParmError);

        ...

        USE_INFO_2 info = new USE_INFO_2();
        info.ui2_local = null;
        info.ui2_asg_type = 0xFFFFFFFF;
        info.ui2_remote = remoteUNC;
        info.ui2_username = username;
        info.ui2_password = Marshal.StringToHGlobalAuto(password);
        info.ui2_domainname = domainName;

        IntPtr buf = Marshal.AllocHGlobal(Marshal.SizeOf(info));
        try
        {
            Marshal.StructureToPtr(info, buf, true);
            uint paramErrorIndex;
            uint returnCode = NetUseAdd(null, 2, buf, out paramErrorIndex);
            if (returnCode != 0)
            {
                throw new Win32Exception((int)returnCode);
            }
        }
        finally
        {
            Marshal.FreeHGlobal(buf);
        }

    This works fine on our Server 2003 boxes, but when moving over to Server 2008 and IIS7 it doesn't work anymore. Through liberal logging I have found that it hangs on the line Marshal.StructureToPtr(info, buf, true);. I have absolutely no idea why. Can anyone shed any light on it, or tell me where I might look for more information?
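
    For comparison, the native C call avoids the manual marshaling step entirely. A sketch of the equivalent (the field values mirror the snippet above; assumes <lm.h> and linking against Netapi32):

        #include <windows.h>
        #include <lm.h>

        NET_API_STATUS map_share(wchar_t *remote, wchar_t *user,
                                 wchar_t *password, wchar_t *domain)
        {
            USE_INFO_2 info = {0};
            DWORD parm_err = 0;

            info.ui2_local      = NULL;
            info.ui2_remote     = remote;
            info.ui2_password   = password;
            info.ui2_asg_type   = 0xFFFFFFFF;   /* as in the managed code */
            info.ui2_username   = user;
            info.ui2_domainname = domain;

            return NetUseAdd(NULL, 2, (LPBYTE)&info, &parm_err);
        }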

    Read the article

  • C# CreatePipe() -> Protected memory error

    - by M. Dimitri
    Hi all, I'm trying to create a pipe using C#. The code is quite simple but I get an error saying "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Here is the COMPLETE code of my form:

        public partial class Form1 : Form
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern bool CreatePipe(out SafeFileHandle hReadPipe,
                out SafeFileHandle hWritePipe, SECURITY_ATTRIBUTES lpPipeAttributes,
                int nSize);

            [StructLayout(LayoutKind.Sequential)]
            public struct SECURITY_ATTRIBUTES
            {
                public DWORD nLength;
                public IntPtr lpSecurityDescriptor;
                public bool bInheritHandle;
            }

            public Form1()
            {
                InitializeComponent();
            }

            private void btCreate_Click(object sender, EventArgs e)
            {
                SECURITY_ATTRIBUTES sa = new SECURITY_ATTRIBUTES();
                sa.nLength = (DWORD)System.Runtime.InteropServices.Marshal.SizeOf(sa);
                sa.lpSecurityDescriptor = IntPtr.Zero;
                sa.bInheritHandle = true;

                SafeFileHandle hWrite = null;
                SafeFileHandle hRead = null;

                if (CreatePipe(out hRead, out hWrite, sa, 4096))
                {
                    MessageBox.Show("Pipe created !");
                }
                else
                    MessageBox.Show("Error : Pipe not created !");
            }
        }

    At the top I declare:

        using DWORD = System.UInt32;

    Thank you very much if someone can help.
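
    One thing worth comparing against the declaration above: in the native signature the attributes parameter is a pointer (LPSECURITY_ATTRIBUTES). A minimal C sketch of the same call:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE hRead = NULL, hWrite = NULL;
            SECURITY_ATTRIBUTES sa;

            sa.nLength = sizeof(sa);
            sa.lpSecurityDescriptor = NULL;   /* default security */
            sa.bInheritHandle = TRUE;         /* handles inheritable by child processes */

            if (CreatePipe(&hRead, &hWrite, &sa, 4096))
                puts("Pipe created!");
            else
                printf("Error: CreatePipe failed (%lu)\n", GetLastError());

            if (hRead)  CloseHandle(hRead);
            if (hWrite) CloseHandle(hWrite);
            return 0;
        }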

    Read the article

  • malloc: error checking and freeing memory

    - by yCalleecharan
    Hi, I'm using malloc to check whether memory can be allocated or not for a particular array z1. ARRAY_SIZE is predefined with a numerical value. I use a cast, as I've read it's safe to do so.

        long double *z1 = (long double *)malloc(sizeof (long double) * ARRAY_SIZE);
        if (z1 == NULL) {
            printf("Out of memory\n");
            exit(-1);
        }

    The above is just a snippet of my code, but when I add the error-checking part (contained in the if statement above), I get a lot of compile-time errors with Visual Studio 2008. It is this error-checking part that's generating all the errors. What am I doing wrong? On a related issue with malloc, I understand that the memory needs to be deallocated/freed after the variable/array z1 has been used. For the array z1, I use:

        free(z1);
        z1 = NULL;

    Is the second line, z1 = NULL, necessary? Thanks a lot...
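
    For reference, here is a self-contained sketch of the allocate/check/free pattern under discussion (the ARRAY_SIZE value is arbitrary):

        #include <stdio.h>
        #include <stdlib.h>

        #define ARRAY_SIZE 1000

        int main(void)
        {
            long double *z1 = malloc(sizeof(long double) * ARRAY_SIZE);
            if (z1 == NULL) {
                fprintf(stderr, "Out of memory\n");
                return EXIT_FAILURE;
            }

            /* ... use z1 ... */

            free(z1);
            z1 = NULL;   /* optional: keeps a dangling pointer from being reused */
            return EXIT_SUCCESS;
        }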

    Read the article

  • Using FreePascal/Lazarus JSON Libraries

    - by Gizmo_the_Great
    I'm hoping for a bit of a "simpleton's" demo/explanation of using Lazarus/FreePascal JSON parsing. I've asked a question here before, but all the replies were "read this", and none of them really helped me get a grasp, because the examples are a bit too in-depth. I'm seeking a very simple example to help me understand how it works. In brief, my program reads an untyped binary file in chunks of 4096 bytes. The raw data then gets converted to ASCII and stored in a string. It then goes through the variable looking for certain patterns, which, it turned out, are JSON data structures. I've currently coded the parsing the hard way using Pos and ExtractANSIString etc. But I've since learnt that there are JSON libraries for Lazarus and FPC, namely fcl-json, fpjson, jsonparser, jsonscanner etc.

        https://bitbucket.org/reiniero/fpctwit/src
        http://fossies.org/unix/misc/fpcbuild-2.6.0.tar.gz:a/fpcbuild-2.6.0/fpcsrc/packages/fcl-json/src/
        http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/packages/fcl-json/examples/

    However, I still can't quite work out HOW to read my string variable and parse it for JSON data, and then access those JSON structures. Can anyone give me a very simple example to help get me going? My code so far (without JSON) is something like this:

        try
          SourceFile.Position := 0;
          while TotalBytesRead < SourceFile.Size do
          begin
            BytesRead := SourceFile.Read(Buffer, sizeof(Buffer));
            inc(TotalBytesRead, BytesRead);
            // A custom function to strip out binary garbage, leaving just ASCII readable text
            StringContent := StripNonAsciiExceptCRLF(Buffer);
            if Pos('MySearchValue', StringContent) > 0 then
            begin
              // Do the parsing. This is where I need to do the JSON stuff
              ...

    Read the article

  • Print the first line of a file

    - by Pedro
    Here is my code:

        void cabclh() {
            FILE *fp;
            char *val, aux;
            int i = 0;
            char *result, cabeca[60];

            fp = fopen("trabalho.txt", "r");
            if (fp == NULL) {
                printf("Erro ao abrir o ficheiro\n");
                return;
            }
            val = (char *)calloc(aux, sizeof(char));
            while (fgetc(fp) == '\n') {
                fgets(cabeca, 60, fp);
                printf("%s\n", cabeca);
            }
            fclose(fp);
            free(fp);
        }

        void infos() {
            FILE *fp;
            char info[100];

            fp = fopen("trabalho.txt", "r");
            if (fp == NULL) {
                printf("Erro ao abrir o ficheiro\n");
            }
            while (fgetc(fp) == '-') {
                fgets(info, 100, fp);
                printf("%s\n", info);
            }
            fclose(fp);
        }

    In cabclh I want the program to recognize that the first line is a header, but this code doesn't print anything. In infos I want it to recognize that every line beginning with '-' is info...
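
    A minimal sketch of printing just the first line with fgets (the filename mirrors the question; note that fgets already stops at the first newline, so no fgetc pre-check is needed):

        #include <stdio.h>

        int main(void)
        {
            FILE *fp = fopen("trabalho.txt", "r");
            char header[60];

            if (fp == NULL) {
                perror("trabalho.txt");
                return 1;
            }
            /* fgets reads up to and including the first '\n' (or EOF) */
            if (fgets(header, sizeof header, fp) != NULL)
                printf("%s", header);

            fclose(fp);   /* close the stream; a FILE* is never passed to free() */
            return 0;
        }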

    Read the article

  • Objective-C classes, pointers to primitive types, etc.

    - by Toby Wilson
    I'll cut a really long story short and give an example of my problem. Given a class that has a pointer to a primitive type as a property:

        @interface ClassOne : NSObject {
            int* aNumber;
        }
        @property int* aNumber;

    The class is instantiated, and aNumber is allocated and assigned a value, accordingly:

        ClassOne* bob = [[ClassOne alloc] init];
        bob.aNumber = malloc(sizeof(int));
        *bob.aNumber = 5;

    It is then passed, by reference, to assign the aNumber value of a separate instance of this type of class, accordingly:

        ClassOne* fred = [[ClassOne alloc] init];
        fred.aNumber = bob.aNumber;

    Fred's aNumber pointer is then freed, reallocated, and assigned a new value, for example 7. Now, the problem I'm having: since Fred has been assigned the same pointer that Bob had, I would expect that Bob's aNumber will now have a value of 7. It doesn't, because for some reason its pointer was freed but not reassigned (it is still pointing to the same address it was first allocated, which is now freed). Fred's pointer, however, has the allocated value 7 in a different memory location. Why is it behaving like this? What am I misunderstanding? How can I make it work like C++ does?
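
    The same behavior can be reproduced in plain C, which may make the pointer semantics easier to see; a small demonstration:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int *bob = malloc(sizeof(int));
            *bob = 5;

            int *fred = bob;               /* fred and bob hold the SAME address */
            *fred = 7;
            printf("*bob = %d\n", *bob);   /* 7: both names alias one int */

            /* but free + malloc gives fred a NEW address... */
            free(fred);
            fred = malloc(sizeof(int));
            *fred = 9;
            /* ...while bob still holds the old, freed address (now dangling) */

            free(fred);
            return 0;
        }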

    Read the article

  • C++: generating a good random seed for pseudo-random number generators

    - by posop
    I am trying to generate a good random seed for a pseudo-random number generator. I thought I'd get the experts' opinions. Let me know if this is a bad way of doing it or if there are much better ways.

        #include <iostream>
        #include <cstdlib>
        #include <fstream>
        #include <ctime>

        unsigned int good_seed()
        {
            unsigned int random_seed, random_seed_a, random_seed_b;
            std::ifstream file("/dev/random", std::ios::binary);
            if (file.is_open())
            {
                char * memblock;
                int size = sizeof(int);
                memblock = new char[size];
                file.read(memblock, size);
                file.close();
                random_seed_a = int(memblock);
                delete[] memblock;
            } // end if
            else
            {
                random_seed_a = 0;
            }
            random_seed_b = std::time(0);
            random_seed = random_seed_a xor random_seed_b;
            return random_seed;
        } // end good_seed()
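
    One detail worth a second look: int(memblock) converts the buffer's address to an integer, not the random bytes that were read into it. A sketch of reading the bytes themselves, in C (it uses /dev/urandom, which never blocks, and keeps the time-based XOR fallback; these choices are assumptions, not the poster's exact design):

        #include <stdio.h>
        #include <time.h>

        unsigned int good_seed(void)
        {
            unsigned int seed = 0;
            FILE *f = fopen("/dev/urandom", "rb");

            if (f != NULL) {
                /* read random BYTES into the integer, not the buffer's address */
                if (fread(&seed, sizeof seed, 1, f) != 1)
                    seed = 0;
                fclose(f);
            }
            return seed ^ (unsigned int)time(NULL);
        }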

    Read the article

  • How does one convert 16-bit RGB565 to 24-bit RGB888?

    - by jleedev
    I’ve got my hands on a 16-bit rgb565 image (specifically, an Android framebuffer dump), and I would like to convert it to 24-bit rgb888 for viewing on a normal monitor. The question is, how does one convert a 5- or 6-bit channel to 8 bits? The obvious answer is to shift it. I started out by writing this:

        uint16_t buf;
        while (read(0, &buf, sizeof buf)) {
            unsigned char red   = (buf & 0xf800) >> 11;
            unsigned char green = (buf & 0x07c0) >> 5;
            unsigned char blue  =  buf & 0x003f;
            putchar(red << 3);
            putchar(green << 2);
            putchar(blue << 3);
        }

    However, this doesn’t have one property I would like, which is for 0xffff to map to 0xffffff, instead of 0xf8fcf8. I need to expand the value in some way, but I’m not sure how that should work. The Android SDK comes with a tool called ddms (Dalvik Debug Monitor) that takes screen captures. As far as I can tell from reading the code, it implements the same logic; yet its screenshots are coming out different, and white is mapping to white. Here’s the raw framebuffer, the smart conversion by ddms, and the dumb conversion by the above algorithm. (By the way, this conversion is implemented in ffmpeg, but it’s just performing the dumb conversion listed above, leaving the LSBs at all zero.) I guess I have two questions: What’s the most sensible way to convert rgb565 to rgb888? How is DDMS converting its screenshots?
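
    The usual answer to the expansion question is to replicate each channel's high bits into the vacated low bits, which maps all-ones to all-ones exactly. A sketch in C (the masks assume the standard rgb565 layout):

        #include <stdint.h>

        /* expand rgb565 to rgb888 by replicating high bits into the low bits */
        void rgb565_to_rgb888(uint16_t px, uint8_t *r, uint8_t *g, uint8_t *b)
        {
            uint8_t r5 = (px >> 11) & 0x1f;
            uint8_t g6 = (px >> 5)  & 0x3f;
            uint8_t b5 =  px        & 0x1f;

            *r = (r5 << 3) | (r5 >> 2);   /* 0x1f -> 0xff, 0x00 -> 0x00 */
            *g = (g6 << 2) | (g6 >> 4);   /* 0x3f -> 0xff */
            *b = (b5 << 3) | (b5 >> 2);
        }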

    Read the article

  • Read binary file into a struct C#

    - by Robert Höglund
    I'm trying to read binary data using C#. I have all the information about the layout of the data in the files I want to read. I'm able to read the data "chunk by chunk", i.e. getting the first 40 bytes of data, converting it to a string, getting the next 40 bytes, and so on. Since there are at least three slightly different versions of the data, I would like to read the data directly into a struct. It just feels so much more right than reading it "line by line". I have tried the following approach, but to no avail:

        StructType aStruct;
        int count = Marshal.SizeOf(typeof(StructType));
        byte[] readBuffer = new byte[count];
        BinaryReader reader = new BinaryReader(stream);
        readBuffer = reader.ReadBytes(count);
        GCHandle handle = GCHandle.Alloc(readBuffer, GCHandleType.Pinned);
        aStruct = (StructType)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(StructType));
        handle.Free();

    The stream is an opened FileStream from which I have begun to read. I get an AccessViolationException when using Marshal.PtrToStructure. The stream contains more information than I'm trying to read, since I'm not interested in the data at the end of the file. The struct is defined like:

        [StructLayout(LayoutKind.Explicit)]
        struct StructType
        {
            [FieldOffset(0)]  public string FileDate;
            [FieldOffset(8)]  public string FileTime;
            [FieldOffset(16)] public int Id1;
            [FieldOffset(20)] public string Id2;
        }

    The example code is changed from the original to make this question shorter. How would I read binary data from a file into a struct?
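
    For what it's worth, in C the equivalent record can be read directly with fread, provided the struct is packed to match the file's layout. A sketch under that assumption (the field sizes mirror the offsets above; the width of the last field is a guess):

        #include <stdio.h>

        #pragma pack(push, 1)          /* match the on-disk layout exactly */
        struct record {
            char file_date[8];         /* offset 0  */
            char file_time[8];         /* offset 8  */
            int  id1;                  /* offset 16 */
            char id2[4];               /* offset 20, width assumed */
        };
        #pragma pack(pop)

        /* returns 1 on success, 0 on short read or EOF */
        int read_record(FILE *fp, struct record *out)
        {
            return fread(out, sizeof *out, 1, fp) == 1;
        }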

    Read the article

  • Making Pascal's triangle with mpz_t's

    - by SDLFunTimes
    Hey, I'm trying to convert a function I wrote, which generates an array of longs representing Pascal's triangle, into a function that returns an array of mpz_t's. However, with the following code:

        mpz_t* make_triangle(int rows, int* count) {
            //compute triangle size using 1 + 2 + 3 + ... + n = n(n + 1) / 2
            *count = (rows * (rows + 1)) / 2;
            mpz_t* triangle = malloc((*count) * sizeof(mpz_t));

            //fill in first two rows
            mpz_t one;
            mpz_init(one);
            mpz_set_si(one, 1);
            triangle[0] = one;
            triangle[1] = one;
            triangle[2] = one;

            int nums_to_fill = 1;
            int position = 3;
            int last_row_pos;
            int r, i;
            for (r = 3; r <= rows; r++) {
                //left most side
                triangle[position] = one;
                position++;

                //inner numbers
                mpz_t new_num;
                mpz_init(new_num);
                last_row_pos = ((r - 1) * (r - 2)) / 2;
                for (i = 0; i < nums_to_fill; i++) {
                    mpz_add(new_num, triangle[last_row_pos + i], triangle[last_row_pos + i + 1]);
                    triangle[position] = new_num;
                    mpz_clear(new_num);
                    position++;
                }
                nums_to_fill++;

                //right most side
                triangle[position] = one;
                position++;
            }
            return triangle;
        }

    I'm getting errors saying "incompatible types in assignment" for all lines where a position in the triangle is being set (i.e. triangle[position] = one;). Does anyone know what I might be doing wrong?
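
    The assignments fail because mpz_t is defined as a one-element array type, so it cannot be copied with =; each slot has to be initialized and set in place through the GMP functions. A sketch of the per-element pattern (a simplified fill, not the full triangle logic):

        #include <gmp.h>
        #include <stdlib.h>

        mpz_t *make_row(int n)
        {
            mpz_t *row = malloc(n * sizeof(mpz_t));
            int i;

            /* mpz_t is an array type: use init/set functions, never '=' */
            mpz_init_set_ui(row[0], 1);
            for (i = 1; i < n; i++) {
                mpz_init(row[i]);
                mpz_add(row[i], row[i - 1], row[0]);   /* placeholder arithmetic */
            }
            return row;   /* caller must mpz_clear each element, then free(row) */
        }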

    Read the article

  • How bad is code using std::basic_string<T> as a contiguous buffer?

    - by BillyONeal
    I know technically the std::basic_string template is not required to have contiguous memory. However, I'm curious how many implementations exist for modern compilers that actually take advantage of this freedom. For example, if one wants code like the following, it seems silly to allocate a vector just to turn around instantly and return it as a string:

        DWORD valueLength = 0;
        DWORD type;
        LONG errorCheck = RegQueryValueExW(
            hWin32, value.c_str(), NULL, &type, NULL, &valueLength);
        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        else if (valueLength == 0)
            return std::wstring();

        std::wstring buffer;
        do
        {
            buffer.resize(valueLength/sizeof(wchar_t));
            errorCheck = RegQueryValueExW(
                hWin32, value.c_str(), NULL, &type, &buffer[0], &valueLength);
        } while (errorCheck == ERROR_MORE_DATA);

        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        return buffer;

    I know code like this might slightly reduce portability because it implies that std::wstring is contiguous, but I'm wondering just how unportable that makes this code. Put another way, how many compilers actually take advantage of the freedom that allowing noncontiguous memory gives? Oh, and of course, given what the code's doing, this only matters for Windows compilers.

    Read the article

  • operator new for an array of a class without a default constructor

    - by skydoor
    For a class without a default constructor, operator new and placement new can be used to create an array of such a class. When I read the code in More Effective C++, I found the code below (I modified some parts). My question is: why is [] needed after operator new? I tested it without it, and it still works. Can anybody explain that?

        class A {
        public:
            int i;
            A(int i) : i(i) {}
        };

        int main() {
            void *rawMemory = operator new[](10 * sizeof(A)); // Why is [] needed here?
            A *p = static_cast<A*>(rawMemory);
            for (int i = 0; i < 10; i++) {
                new(&p[i]) A(i);
            }
            for (int i = 0; i < 10; i++) {
                cout << p[i].i << endl;
            }
            for (int i = 0; i < 10; i++) {
                p[i].~A();
            }
            return 0;
        }

    Read the article

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process, given its PID. The command-line utility lsof does something similar. Here's the source of the Python script:

        import ctypes
        from ctypes import util
        import sys

        PROC_PIDVNODEPATHINFO = 9

        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))
        print(proc.proc_pidinfo)

        class vnode_info(ctypes.Structure):
            _fields_ = [('data', ctypes.c_ubyte * 152)]

        class vnode_info_path(ctypes.Structure):
            _fields_ = [('vip_vi', vnode_info),
                        ('vip_path', ctypes.c_char * 1024)]

        class proc_vnodepathinfo(ctypes.Structure):
            _fields_ = [('pvi_cdir', vnode_info_path),
                        ('pvi_rdir', vnode_info_path)]

        inst = proc_vnodepathinfo()
        pid = int(sys.argv[1])
        ret = proc.proc_pidinfo(pid, PROC_PIDVNODEPATHINFO, 0,
                                ctypes.byref(inst), ctypes.sizeof(inst))
        print(ret, inst.pvi_cdir.vip_path)

    However, even though this script behaves as expected on Python 2.6, it does not work on Python 2.5:

        host:dir user$ sudo /usr/bin/python2.6 script.py 2698
        <_FuncPtr object at 0x100419ae0>
        (2352, '/')
        host:dir user$ sudo /usr/bin/python2.5 script.py 2698
        <_FuncPtr object at 0x19fdc0>
        (0, '')

    (PID 2698 is "Activity Monitor.app".) Note the different return values. Since this program is so strongly based on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness, or even come up with a solution for 2.5, but here's some stuff:

        host:dir user$ otool -L /usr/bin/python2.6
        /usr/bin/python2.6:
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ otool -L /usr/bin/python2.5
        /usr/bin/python2.5 (architecture i386):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        /usr/bin/python2.5 (architecture ppc7400):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ uname -a
        Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks to anyone who has a clue about what's going on here :)
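
    Calling the same libproc routine from C sidesteps the ctypes layer entirely and uses the system's own struct definition, which can help isolate whether the difference lies in the interpreter builds. A macOS-specific sketch (minimal error handling):

        #include <stdio.h>
        #include <stdlib.h>
        #include <libproc.h>
        #include <sys/proc_info.h>

        int main(int argc, char **argv)
        {
            struct proc_vnodepathinfo vpi;
            int pid = atoi(argv[1]);

            /* fills vpi with the process's current and root directory vnodes */
            int ret = proc_pidinfo(pid, PROC_PIDVNODEPATHINFO, 0, &vpi, sizeof(vpi));
            if (ret <= 0) {
                fprintf(stderr, "proc_pidinfo failed\n");
                return 1;
            }
            printf("cwd: %s\n", vpi.pvi_cdir.vip_path);
            return 0;
        }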

    Read the article

  • Using kAudioSessionProperty_OtherMixableAudioShouldDuck on iPhone

    - by Cliff
    Hello, I'm trying to get consistent behavior out of the kAudioSessionProperty_OtherMixableAudioShouldDuck property on the iPhone to allow iPod music blending, and I'm having trouble. At the start of my app I set an Ambient category like so:

        -(void) initialize {
            [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryAmbient error: nil];
        }

    Later on, when I attempt to play audio, I set the duck property using the following method:

        //this will crossfade the audio with the ipod music
        - (void) toggleCrossfadeOn:(UInt32)onOff {
            //crossfade the ipod music
            AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck,
                                    sizeof(onOff), &onOff);
            AudioSessionSetActive(onOff);
        }

    I call this passing a numeric "1" just before playing audio, like so:

        [self toggleCrossfadeOn:1];
        [player play];

    I then call the crossfade method again, passing a zero when my app's audio completes, using a playback-is-stopping callback like so:

        -(void) playbackIsStoppingForPlayer:(MQAudioPlayer*)audioPlayer {
            NSLog(@"Releasing player");
            [audioPlayer release];
            [self toggleCrossfadeOn:0];
        }

    In my production app this exact code works as expected, causing the iPod to fade out just before playing my app's audio, then fade back in when the audio finishes playing. In a new project I recently started, I get different behavior: the iPod audio fades down and never fades back in. In my production app I use AVAudioPlayer, where in my new app I've written a custom audio player that uses audio queues. Could somebody help me understand the differences and how to properly use this API?

    Read the article

  • Passing char * into fopen with C.

    - by Rhys
    Hey there, I'm writing a program that passes data from a file into an array, but I'm having trouble with fopen(). It seems to work fine when I hardcode the file path into the parameters (e.g. fopen("data/1.dat", "r");), but when I pass it as a pointer it returns NULL. Note that line 142 will print "data/1.dat" if entered from the command line, so parse_args() appears to be working.

        132 int
        133 main(int argc, char **argv)
        134 {
        135     FILE *in_file;
        136     int *nextItem = (int *) malloc (sizeof (int));
        137     set_t *dictionary;
        138
        139     /* Parse Arguments */
        140     clo_t *iopts = parse_args(argc, argv);
        141
        142     printf ("INPUT FILE: %s.\n", iopts->input_file); /* This prints correct path */
        143     /* Initialise dictionary */
        144     dictionary = set_create (SET_INITAL_SIZE);
        145
        146     /* Use fscanf to read all data values into new set_t */
        147     if ((in_file = fopen (iopts->input_file, "r")) == NULL)
        148     {
        149         printf ("File not found...\n");
        150         return 0;
        151     }

    Thanks! Rhys
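
    When fopen returns NULL it sets errno, so printing the actual reason is more informative than a generic message; it quickly distinguishes a missing file from, say, a stray newline left in the path by the argument parser. A small sketch (the path is illustrative):

        #include <stdio.h>
        #include <string.h>
        #include <errno.h>

        int main(void)
        {
            const char *path = "data/1.dat";
            FILE *in_file = fopen(path, "r");

            if (in_file == NULL) {
                /* report WHY it failed: ENOENT, EACCES, ... */
                fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
                return 1;
            }
            fclose(in_file);
            return 0;
        }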

    Read the article

  • AVL tree in C language

    - by I_S_W
    Hey all, I am currently doing a project that requires the use of AVL trees. The insert function I wrote for the AVL does not seem to be working; it works for 3 or 4 nodes at maximum. I would really appreciate your help. My attempt is below:

        Tree insert(Tree t, char name[80], int num)
        {
            if (t == NULL) {
                t = (Tree)malloc(sizeof(struct node));
                if (t != NULL) {
                    strcpy(t->name, name);
                    t->num = num;
                    t->left = NULL;
                    t->right = NULL;
                    t->height = 0;
                }
            }
            else if (strcmp(name, t->name) < 0) {
                t->left = insert(t->left, name, num);
                if ((height(t->left) - height(t->right)) == 2) {
                    if (strcmp(name, t->left->name) < 0)
                        t = s_rotate_left(t);
                    else
                        t = d_rotate_left(t);
                }
            }
            else if (strcmp(name, t->name) > 0) {
                t->right = insert(t->right, name, num);
                if ((height(t->right) - height(t->left)) == 2) {
                    if (strcmp(name, t->right->name) > 0)
                        t = s_rotate_right(t);
                    else
                        t = d_rotate_right(t);
                }
            }
            t->height = max(height(t->left), height(t->right)) + 1;
            return t;
        }
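
    One piece worth double-checking alongside an insert like this is the height helper: it has to tolerate NULL children (conventionally returning -1 for an empty subtree), or the balance test reads garbage. A typical sketch, assuming the same node layout as above:

        /* height of an empty subtree is -1, so a leaf node gets height 0 */
        static int height(Tree t)
        {
            return (t == NULL) ? -1 : t->height;
        }

        static int max(int a, int b)
        {
            return (a > b) ? a : b;
        }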

    Read the article

  • Data loss when downloading data from LDAP server

    - by Ricky D'Amelio
    Hi there. This question comes from a previous one I asked about handling NSData objects: http://stackoverflow.com/questions/2453785/converting-nsdata-to-an-nsstring-representation-is-failing. I have reached the point where I am taking an NSImage, turning it into NSData, and uploading those data bytes to the LDAP server. I am doing this like so:

        //connected successfully to LDAP server above...
        struct berval photo_berval;
        struct berval *jpegPhoto_values[2];

        photo_berval.bv_len = [photo length];
        photo_berval.bv_val = [photo bytes];
        jpegPhoto_values[0] = &photo_berval;
        jpegPhoto_values[1] = NULL;

        mod.mod_type = "jpegPhoto";
        mod.mod_op = LDAP_MOD_REPLACE|LDAP_MOD_BVALUES;
        mod.mod_bvalues = jpegPhoto_values;
        mods[0] = &mod;
        mods[1] = NULL;

        //perform the modify operation
        rc = ldap_modify_ext_s(ld, givenModifyEntry, mods, NULL, NULL);

    That happens with no errors, and you can see a big blob of data when you're in the command line. My problem is, when I go to access the same data at a later stage, I am getting an image file back that's about 120 times smaller than the original image. I am doing this like so:

        //find the jpegPhoto attribute
        photoA = ldap_first_attribute(ld, photoE, &photoBer);
        while (strcasecmp(photoA, "jpegphoto") != 0) {
            photoA = ldap_next_attribute(ld, photoE, photoBer);
        }

        //get the value of the attribute
        if ((list_of_photos = ldap_get_values_len(ld, photoE, photoA)) != NULL) {
            //get the first JPEG
            photo_data = *list_of_photos[0];
            selectedPictureData = [NSData dataWithBytes:&photo_data length:sizeof(photo_data)];
            [selectedPictureData writeToFile:@"/Users/username/Desktop/Photo 2.jpg" atomically:YES];
            NSLog(@"%@", selectedPictureData);

    Has anyone successfully done this before, or can anyone see what I might be doing wrong? I appreciate anyone's help. Sorry to post so many questions! Ricky.
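
    A detail that often bites here: ldap_get_values_len returns struct berval pointers, where bv_val points at the raw bytes and bv_len holds the true byte count, so taking sizeof of a copied struct yields the struct's size rather than the photo's. A C sketch of pulling the bytes out (the attribute name mirrors the question; error handling is minimal):

        #include <stdio.h>
        #include <ldap.h>

        /* write the first jpegPhoto value of an entry to a file */
        int save_first_photo(LDAP *ld, LDAPMessage *entry, const char *path)
        {
            struct berval **vals = ldap_get_values_len(ld, entry, "jpegPhoto");
            if (vals == NULL || vals[0] == NULL)
                return -1;

            FILE *out = fopen(path, "wb");
            if (out != NULL) {
                /* bv_val is the byte buffer; bv_len is the real length */
                fwrite(vals[0]->bv_val, 1, vals[0]->bv_len, out);
                fclose(out);
            }
            ldap_value_free_len(vals);
            return 0;
        }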

    Read the article

  • I just don't get AudioFileReadPackets

    - by Eric Christensen
    I've tried to write the smallest chunk of code that narrows down a problem. It's now just a few lines and it doesn't work, which makes it pretty clear that I have a fundamental misunderstanding of how to use AudioFileReadPackets. I've read the docs and other examples online, and apparently I'm just not getting it. Could you explain it to me? Here's what this block should do: I've previously opened a file. I want to read just one packet, the first one of the file, and then print it. But it crashes on the AudioFileReadPackets line:

        AudioFileID mAudioFile2;
        AudioFileOpenURL(audioFileURL, 0x01, 0, &mAudioFile2);

        UInt32 *audioData2 = (UInt32 *)malloc(sizeof(UInt32) * 1);
        AudioFileReadPackets(mAudioFile2, false, NULL, NULL, 0, (UInt32*)1, audioData2);

        NSLog(@"first packet:%i", audioData2[0]);

    (For clarity, I've stripped out all error handling.) It's the AFRP line that crashes out. (I understand that the third and fourth arguments are useful, and in my "real" code I use them, but they're not required, right? So NULL in this case should work, right?) So then what's going on? Any guidance would be much appreciated. Thanks.
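
    One thing to look at closely: the sixth parameter, ioNumPackets, is read and written through the pointer, so it must point at a real variable; casting the literal 1 to a pointer makes the call touch address 1. A hedged C sketch of the call shape (assumes an already-opened AudioFileID and a buffer large enough for the packet):

        #include <AudioToolbox/AudioToolbox.h>

        /* read the first packet of an already-opened audio file */
        OSStatus read_first_packet(AudioFileID file, void *buffer, UInt32 *outNumBytes)
        {
            UInt32 numPackets = 1;   /* in: packets wanted; out: packets actually read */

            return AudioFileReadPackets(file,
                                        false,        /* don't cache */
                                        outNumBytes,  /* out: bytes read */
                                        NULL,         /* packet descriptions not needed here */
                                        0,            /* start at packet 0 */
                                        &numPackets,  /* pointer to a real variable */
                                        buffer);
        }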

    Read the article

  • ffmpeg (libavcodec): memory leaks in avcodec_encode_video

    - by gavlig
    I'm trying to transcode a video with the help of libavcodec. When transcoding big video files (an hour or more) I get huge memory leaks in avcodec_encode_video. I have tried to debug it, but with different video files different functions produce leaks, and I've gotten a little confused about that :). [Here](http://stackoverflow.com/questions/4201481/ffmpeg-with-qt-memory-leak) is the same issue that I have, but I have no idea how that person solved it. QtFFmpegwrapper seems to do the same thing I do (or I missed something). My method is below. I take care of aFrame and aPacket outside, with av_free and av_free_packet.

        int Videocut::encode(
            AVStream *anOutputStream,
            AVFrame *aFrame,
            AVPacket *aPacket
        )
        {
            AVCodecContext *outputCodec = anOutputStream->codec;

            if (!anOutputStream || !aFrame || !aPacket) {
                return 1;
                /* NOTREACHED */
            }

            uint8_t *buffer = (uint8_t *)malloc(
                sizeof(uint8_t) * _DefaultEncodeBufferSize
            );
            if (NULL == buffer) {
                return 2;
                /* NOTREACHED */
            }

            int packetSize = avcodec_encode_video(
                outputCodec,
                buffer,
                _DefaultEncodeBufferSize,
                aFrame
            );
            if (packetSize < 0) {
                free(buffer);
                return 1;
                /* NOTREACHED */
            }

            aPacket->data = buffer;
            aPacket->size = packetSize;

            return 0;
        }

    Read the article

  • MPI hypercube broadcast error

    - by luvieere
    I've got a one-to-all broadcast method for a hypercube, written using MPI:

        one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype)
        {
            MPI_Status status;
            int mask, partner;
            int mask2 = ((1 << n) - 1) ^ (1 << n-1);

            for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1) {
                if (rank & mask2 == 0) {
                    partner = rank ^ mask;
                    if (rank & mask)
                        MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status);
                    else
                        MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD);
                }
            }
        }

    Upon calling it from main:

        int main(int argc, char **argv)
        {
            int n, rank;

            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &n);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            one2allbcast(floor(log(n) / log(2)), rank, "message", sizeof(message), MPI_CHAR);

            MPI_Finalize();
            return 0;
        }

    compiling and executing on 8 nodes, I receive a series of errors reporting that processes 1, 3, 5, 7 were stopped before the point of receiving any data:

        MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD)
        Rank (1, MPI_COMM_WORLD): Call stack within LAM:
        Rank (1, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (1, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD)
        Rank (3, MPI_COMM_WORLD): Call stack within LAM:
        Rank (3, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (3, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD)
        Rank (5, MPI_COMM_WORLD): Call stack within LAM:
        Rank (5, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (5, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD)
        Rank (7, MPI_COMM_WORLD): Call stack within LAM:
        Rank (7, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (7, MPI_COMM_WORLD):  - main()

    Where do I go wrong?
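
    One C pitfall worth checking in code like this: == binds more tightly than &, so rank & mask2 == 0 parses as rank & (mask2 == 0). A tiny demonstration:

        #include <stdio.h>

        int main(void)
        {
            int rank = 1, mask2 = 4;

            /* parsed as rank & (mask2 == 0), i.e. 1 & 0 */
            printf("%d\n", rank & mask2 == 0);    /* prints 0 */

            /* the intended test needs parentheses: (1 & 4) == 0 */
            printf("%d\n", (rank & mask2) == 0);  /* prints 1 */

            return 0;
        }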

    Read the article

  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module containing a structure of size 34, using the standard command:

        make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules

    The sizeof(some_structure) is coming out as 36 instead of 34, i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed as:

        make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules
        make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE'
        test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \
            echo; \
            echo "  ERROR: Kernel configuration is invalid."; \
            echo "         include/linux/autoconf.h or include/config/auto.conf are missing."; \
            echo "         Run 'make oldconfig && make prepare' on kernel src to fix it."; \
            echo; \
            /bin/false)
        mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/*
        make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src
        gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
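
    The usual way to force a fixed layout with gcc is the packed attribute, applied to the struct itself (the struct below is hypothetical; its sizes are chosen to match the 34-vs-36 numbers in the question):

        #include <linux/types.h>

        /* without packing: the u32 is aligned to 4 bytes, so sizeof == 36 */
        struct eth_header_padded {
            u16 a;
            u32 b;       /* 2 bytes of padding inserted before this member */
            u8  data[28];
        };

        /* with packing: members laid out back to back, sizeof == 34 */
        struct eth_header_packed {
            u16 a;
            u32 b;
            u8  data[28];
        } __attribute__((packed));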

    Read the article
