Search Results

Search found 33496 results on 1340 pages for '32 vs 64 bit'.

Page 81/1340

  • How to modify a grub entry to support a KGDB kernel image?

    - by Nishant
    I am trying to update the target machine's grub.cfg for a KGDB setup, but while booting, the machine hangs completely instead of waiting for a remote gdb connection. This is the entry I added:

        menuentry 'Ubuntu, with Linux 2.6.32-24-kgdb' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set 12878c3b-c553-4b4b-986a-6e32daea3ad1
            linux /vmlinuz-2.6.32-kgdb root=/dev/mapper/ubuntu-root ro kgdbwait [email protected]/,@192.168.140.158/ quiet
            initrd /initrd.img-2.6.32-24-server
        }

    I have also compiled and copied /boot/vmlinuz-2.6.15.5-kgdb and /boot/System.map-2.6.15.5-kgdb to the target machine from the development machine. The standard entry in grub.cfg before adding KGDB was:

        menuentry 'Ubuntu, with Linux 2.6.32-24-server' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set 12878c3b-c553-4b4b-986a-6e32daea3ad1
            linux /vmlinuz-2.6.32-24-server root=/dev/mapper/ubuntu-root ro quiet
            initrd /initrd.img-2.6.32-24-server
        }

    Please suggest how to get rid of this problem.
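
    For comparison, a minimal sketch of a KGDB boot entry using the serial transport (kgdboc) rather than ethernet; the kernel version and serial device here are placeholders for whatever your build actually produced:

        menuentry 'Ubuntu, with Linux <version>-kgdb' {
            insmod ext2
            set root='(hd0,1)'
            linux /vmlinuz-<version>-kgdb root=/dev/mapper/ubuntu-root ro kgdboc=ttyS0,115200 kgdbwait
            initrd /initrd.img-<version>-kgdb
        }

    Note that kgdbwait deliberately halts the boot until a debugger attaches, so a "hang" at that point may just be the kernel waiting on a transport that never came up.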

    Read the article

  • Ubuntu Linux won't display netbook's native resolution

    - by Daniel
    FYI: my netbook is an HP Mini 210-1004sa, which has an Intel Graphics Media Accelerator 3150 and a 10.1" active-matrix colour TFT display at 1024 x 600. I recently removed Windows 7 Starter from it and replaced it with Ubuntu 12.10. The problem is that the OS doesn't recognise the native resolution of 1024x600: the bottom of the Ubuntu desktop is hidden beneath the screen, and the only two available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system runs a bit slowly, but I can't, because then I wouldn't be able to reach the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity launcher is visible enough. I was able to define a custom 1024x600 resolution using the Q&A "How set my monitor resolution?", but when I set that resolution, a black band appears at the top of the screen and the desktop area is pushed down, with part of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see if the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:

        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
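
    For reference, the usual xrandr recipe for adding a missing mode looks like the sketch below. The modeline numbers come from cvt 1024 600 60 (regenerate them on your own machine) and the output name LVDS1 is an assumption; check the output list that plain xrandr prints:

        cvt 1024 600 60
        xrandr --newmode "1024x600_60.00" 49.00 1024 1072 1168 1312 600 603 613 624 -hsync +vsync
        xrandr --addmode LVDS1 1024x600_60.00
        xrandr --output LVDS1 --mode 1024x600_60.00

    These settings don't survive a reboot on their own, which is consistent with the mode resetting to 1024x768 at startup.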

    Read the article

  • Sendmail on Ubuntu 12.04 64-bit: connection times out?

    - by adam
    Okay, I get the following error message:

        to=<[email protected]>, ctladdr=<www-data@adam-linux> (33/33), delay=2+08:20:35, xdelay=00:00:00, mailer=esmtp, pri=25590437, relay=adamziolkowski.com., dsn=4.0.0, stat=Deferred: Connection timed out with adamziolkowski.com.

    I'm guessing that to make sendmail work I have to change the default outgoing port from 25 to 465, because Comcast blocks port 25. Any ideas? What could be causing this error?
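
    If the provider does block port 25, the usual sendmail approach is to relay through the provider's smarthost on the submission port instead of delivering directly. A sketch of the relevant sendmail.mc lines, assuming Comcast's smarthost and port 587 (some providers use 465; adjust to whatever yours documents):

        define(`SMART_HOST', `[smtp.comcast.net]')dnl
        define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
        define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl

    Rebuild sendmail.cf afterwards (on Ubuntu, sudo sendmailconfig) and restart the service.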

    Read the article

  • Database Mail and SMO are indeed supported on 64-bit, Standard Edition instances of SQL Server 2012

    - by Argenis
    This is something that comes up rather regularly at forums, so I decided to create a quick post to make sure that folks out there can feel better about SQL Server 2012. If you read the Web article "Features Supported By Editions of SQL Server 2012" as of the time of writing this post, you will see that it claims these two features are not supported on x64 Standard Edition. This is NOT correct. It is most definitely a documentation bug – one that unfortunately has caused some customers to sit in a holding pattern before upgrading to SQL Server 2012. Database Mail and SMO do work and are fully supported on SQL Server 2012 Standard Edition x64 instances. These features work as they should. I have contacted the documentation teams internally to make sure that this is corrected in future revisions of that Web article.

    Read the article

  • Bit copy of encrypted home and other partitions

    - by Mka
    My laptop is overheating, so I need to save all my files before I format the hard drive. I have learned how to copy /dev/sdX using the dd command, but I am not sure what to copy. A picture from GParted is here: http://is.muni.cz/www/256590/fig.png – should I copy sda5 and sda6 only? Or sda2 and sda1? I do not need to use the data on another disk, I just want to be able to access it, so I want to put it on an external hard drive. And a last question: how will I then mount my encrypted home? Will it work? Thanks a lot!
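
    For reference, a minimal dd sketch for imaging a single partition to a file on a mounted external drive; the device and mount names here are assumptions, so match them against the GParted screenshot first:

        sudo dd if=/dev/sda5 of=/media/external/sda5.img bs=4M conv=noerror,sync
        sudo dd if=/dev/sda6 of=/media/external/sda6.img bs=4M conv=noerror,sync

    An image of an encrypted partition stays encrypted; to read it later you would loop-mount the image and unlock it the same way the installer set it up (for Ubuntu's encrypted home of that era, that is typically ecryptfs inside the filesystem rather than a separate LUKS partition).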

    Read the article

  • What is the best method to start understanding BIG project source code? [closed]

    - by Mr.32
    Possible Duplicate: How do you dive into large code bases? Sometimes, before developing a new product, we need to understand an existing product or its source code; sometimes, to write one small module of a big project, we need to understand that big project's source code. In our case we need to study and understand a project with lots of files and folders. What is the easiest and most comfortable way to do it (especially for C and C++ under Linux)?
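
    For C/C++ under Linux, a common first step is to build a cross-reference you can navigate from your editor; both tools below are standard packages and the commands are the usual defaults:

        ctags -R .     # tags file for jump-to-definition in vim/emacs
        cscope -Rbq    # cross-reference database for caller/callee searches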

    Read the article

  • Wine 1.4: cannot install vcrun6 on Ubuntu Studio 12.04.1 64-bit

    - by ABOBA
    I cannot install vcrun6. I tried with winetricks and manually (downloading vcredist.exe and installing it), but nothing works. Launching it in a terminal gives the following:

        _user@_user-machine:~$ WINEPREFIX="/home/_user/.wine" wine "C:/vcredist.exe"
        fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 32f63c,0
        err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\_user\\Temp\\IXP000.TMP\\comcat.dll" -> L"C:\\windows\\system32\\comcat.dll"
        fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 32f63c,0
        err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\_user\\Temp\\IXP000.TMP\\msvcrt.dll" -> L"C:\\windows\\system32\\msvcrt.dll"
        fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 32f63c,0
        err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\_user\\Temp\\IXP000.TMP\\oleaut32.dll" -> L"C:\\windows\\system32\\oleaut32.dll"
        fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 32f63c,0
        err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\_user\\Temp\\IXP000.TMP\\olepro32.dll" -> L"C:\\windows\\system32\\olepro32.dll"
        fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 32f63c,0
        err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\_user\\Temp\\IXP000.TMP\\stdole2.tlb" -> L"C:\\windows\\system32\\stdole2.tlb"

    The distribution is Ubuntu Studio 12.04.1 64-bit. Thanks in advance.

    Read the article

  • Speed up an executable program on Linux (bit toggling)

    - by AK_47
    I have a ZyBo circuit board with an ARMv7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50 MHz, yet the clock my program generates has a maximum frequency of only about 115 Hz. I need this program's output to be as fast as possible because the PMOD I'm using is capable of 50 MHz. I compiled my program with the following command line:

        gcc -ofast (c_program)

    Here is some sample code:

        #include <stdio.h>
        #include <stdlib.h>

        #define ARRAYSIZE 511

        //________________________________________
        //macro for the SIGNAL PMOD
        //________________________________________
        //DATA
        //ZYBO Use Pin JE1
        #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction");
        #define SIGNAL_ON system("echo 1 > /sys/class/gpio/gpio54/value");
        #define SIGNAL_OFF system("echo 0 > /sys/class/gpio/gpio54/value");

        //________________________________________
        //macro for the "CLOCK" PMOD
        //________________________________________
        //CLOCK
        //ZYBO Use Pin JE4
        #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction");
        #define MYCLOCK_ON system("echo 1 > /sys/class/gpio/gpio57/value");
        #define MYCLOCK_OFF system("echo 0 > /sys/class/gpio/gpio57/value");

        int main(void){
            int myarray[ARRAYSIZE] = { //hard-coded array of signal data
                1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0
            };

            INIT_SIGNAL
            INIT_MYCLOCK;

            //infinite loop
            int i;
            do{
                i = 0;
                do{
                    /* 1020 is chosen because it is twice the size needed,
                       allowing for the changes in the clock.
                       (511 = 0-510, 510*2 = 1020 ==> 0-1020 needed, so 1021 it is) */
                    if((i%2)==0) {
                        MYCLOCK_ON;
                        if(myarray[i/2] == 1){
                            SIGNAL_ON;
                        }else{
                            SIGNAL_OFF;
                        }
                    }
                    else if((i%2)==1) {
                        MYCLOCK_OFF;
                        //don't need to change the signal; it stays at whatever it was
                    }
                    ++i;
                } while(i < 1021);
            } while(1);
            return 0;
        }

    I'm using the 'system' call to tell the system to output 1 volt or 0 volts on a pin of the board (one pin for the data and another for the clock, representing the data and clock signals). That was the only way I knew to tell the system to output a voltage. What can I do to make my program's output rate reach at least the megahertz range?
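
    The system() calls are almost certainly the bottleneck: every toggle forks a shell and re-opens the sysfs file, which costs milliseconds. A minimal sketch of the usual first fix, keeping the value files open and writing them directly (GPIO numbers are taken from the question; export/direction setup and the data array are omitted):

        #include <fcntl.h>
        #include <unistd.h>

        int main(void){
            /* assumes gpio54/gpio57 are already exported and set to "out" */
            int sig = open("/sys/class/gpio/gpio54/value", O_WRONLY);
            int clk = open("/sys/class/gpio/gpio57/value", O_WRONLY);
            if (sig < 0 || clk < 0) return 1;
            for (;;) {
                write(clk, "1", 1);  /* clock high */
                write(sig, "1", 1);  /* data bit (select from your array) */
                write(clk, "0", 1);  /* clock low */
            }
        }

    Each write() is a single syscall, so this typically reaches hundreds of kilohertz. To get anywhere near 50 MHz you would have to bypass the kernel entirely, either by mmap()ing the Zynq GPIO controller registers through /dev/mem or by generating the waveform in the FPGA fabric, which is what the PL side of the ZyBo is for. Also note that gcc's optimization flag is -Ofast with a capital O; lowercase -ofast merely names the output file "fast" and applies no optimization.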

    Read the article

  • Dual boot UEFI Windows 7 and Ubuntu 12.04 (both 64-bit): W7 entry doesn't appear in GRUB

    - by Joe
    After two days of trying to install both OSes, I'm confused and getting mad... I have a 128 GB SSD and a 500 GB HDD, both empty. My laptop is an Asus K55VM, and its BIOS supports UEFI. What I have done:

        1. Install the new SSD (Samsung 830 128 GB).
        2. Use GParted on a live CD to create a new partition table (GPT) and three partitions on the SSD:
           Partition 1: 80 GB (Windows 7)
           Partition 2: 30 GB (Ubuntu 12.04, just /)
           Partition 3: 10 GB unused (for future extension of the other partitions)
        3. Install Windows 7 (with UEFI) in Partition 1. This creates:
           /dev/sda1 - 100 MB for System (the UEFI boot partition, I guess) - FAT32
           /dev/sda2 - 100 MB approx. for MSR
           /dev/sda3 - 79,800 MB approx. for Windows 7 data

    At this point everything works fine; I have Windows 7. Then I install Ubuntu 12.04 amd64 (with UEFI) as follows: / in Partition 2 (/dev/sda4, 30 GB, ext4), with /home and swap on the HDD. I select /dev/sda1 as the bootloader location (where the UEFI boot files are supposed to live). I install updates and reboot.

    Problem: now the GRUB menu shows only Ubuntu entries, not Windows 7.

    Alternative solution found: when I turn the laptop on, pressing ESC before GRUB loads brings up the BIOS boot menu, where I can select the Windows partition, the Ubuntu partition, DVD, USB, etc. But I don't think that is the best way to boot different OSes.

    I've tried sudo update-grub2 with no success. What can I do?
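
    When os-prober is installed, update-grub normally finds the Windows boot manager on the EFI system partition; if it doesn't, a manual chainload entry in /etc/grub.d/40_custom is the usual fallback. A sketch, where XXXX-XXXX is a placeholder for the EFI partition's FAT UUID (see sudo blkid /dev/sda1):

        menuentry "Windows 7 (UEFI)" {
            insmod part_gpt
            insmod fat
            search --fs-uuid --set=root XXXX-XXXX
            chainloader /EFI/Microsoft/Boot/bootmgfw.efi
        }

    Then run sudo update-grub again so the entry is picked up.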

    Read the article

  • Running external commands improved a bit

    - by Tomas Mysik
    Hi all, today we would like to show you one small improvement related to running external commands (e.g. generating documentation, running framework commands, etc.) which will be available in NetBeans 7.3. First, have a look at the screenshot: as you can see, the first line represents the command that is being executed. In case of an error, this command can easily be copied and pasted into a console for deeper investigation (and proper bug reports ;). Also, please notice that the Output window now supports background colors. That's all for today. As always, please test it and report any issues or enhancements you find in NetBeans Bugzilla (component php, subcomponent Code).

    Read the article

  • Vista missing from GRUB boot list after installing Ubuntu

    - by tacomensa
    I installed Ubuntu on a logical partition a while ago. When I get to the GRUB boot list, Vista is not there. What I get is this:

        Ubuntu, with Linux 2.6.32-26
        Ubuntu, with Linux 2.6.32-26 (recovery mode)
        Ubuntu, with Linux 2.6.32-25
        Ubuntu, with Linux 2.6.32-25 (recovery mode)
        Ubuntu, with Linux 2.6.32-24
        Ubuntu, with Linux 2.6.32-24 (recovery mode)
        Memory test (memtest86+)
        Windows Vista (loader) (on /dev/sda1)
        Windows Recovery Environment (loader) (on /dev/sda2)

    The "Windows Vista (loader)" entry is actually the Acer eRecovery manager. I'm guessing that GRUB was installed on my primary partition, so it overwrote the Vista MBR and I don't have the option to boot Vista. Is there some way I can just edit the MBR and add Vista to it, or how do I repair this? Here is my boot script: http://pastebin.com/7HZFjBT7
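
    If the real Vista bootloader still exists on its partition, the usual GRUB-side fix is a manual chainload entry rather than editing the MBR. A sketch for /etc/grub.d/40_custom, assuming Vista's boot files live on /dev/sda1 (GRUB's (hd0,1); adjust to match what the boot script shows):

        menuentry "Windows Vista" {
            insmod ntfs
            set root='(hd0,1)'
            chainloader +1
        }

    Follow up with sudo update-grub so grub.cfg is regenerated.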

    Read the article

  • "A driver (service) for this device has been disabled" is how the Code 32 error starts off:

    - by E S
    "A driver (service) for this device has been disabled. An alternate driver may be providing this functionality. (Code 32)" No drive letter shows in Device Manager, and the DVD/CD drive is now unusable because it is not seen. This all happened when I started using a new external USB hard drive from Buffalo. I have Windows 7 64-bit. Everything else looks to be working fine. Out of desperation I even tried to hook up an external DVD drive that had worked fine in the past (it was just too slow and ate up memory, so I never used it). It tries to use the same drivers, and when you click to update the drivers, it says this is the best one. HELP... even if I wanted (which I DON'T) to use the factory Windows 7 re-installation DVD, how? There is no drive to install it from in this situation. I am at a loss here, and Buffalo tech support was of no help at all; they just said they could not help. Any help would be much appreciated.
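
    The commonly suggested fix for Code 32 on optical drives is removing stale UpperFilters/LowerFilters values from the CD/DVD device class key. The GUID below is the standard CD-ROM class; back up the registry before deleting anything, and note that one of the two values may simply not exist:

        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f

    Reboot afterwards so the class driver stack is rebuilt.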

    Read the article

  • Company wants to write a custom project management tool rather than use a third-party product

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For the development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product.

    One concern is that, whilst we are writing code for a client and using our custom Intranet project management module, we might find bugs in the module that need fixing ASAP. That means stopping work on the client code to fix the Intranet, which puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about.

    EDIT: Thanks for the great responses, much appreciated. To summarize some of them:

    Money: Naturally my boss wants to save money by not forking out a few hundred pounds for licences. However, for us to write a custom tool, it will take x number of days multiplied by approx. £500, which is our cost. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic) that in order to give it to another client, I can see us having to fork the code and rebuild the majority of it anyway. So it's not like we're gaining anything through reuse.

    Features: Having our own custom module means no feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again, it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why? What's the point? Although management are asking for this tool, it does not mean I'm going to please them and build it just because they asked. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :)

    Hmm, I've not been able to make this question a wiki for some reason, so I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • Java bitshift strangeness

    - by Martin
    Java has two bitshift operators for right shifts: >> shifts right and depends on the sign bit for the sign of the result; >>> shifts right and shifts zeros into the leftmost bits (http://java.sun.com/docs/books/tutorial/java/nutsandbolts/op3.html). This seems fairly simple, so can anyone explain to me why this code, when given a value of -128 for bar, produces a value of -2 for foo?

        byte foo = (byte)((bar & ((byte)-64)) >>> 6);

    What this is meant to do is take an 8-bit byte, mask off the leftmost 2 bits, and shift them into the rightmost 2 bits. I.e.:

        initial          = 0b10000000 (-128)
        -64              = 0b11000000
        initial & -64    = 0b10000000
        0b10000000 >>> 6 = 0b00000010

    The result actually is -2, which is 0b11111110, i.e. 1s rather than 0s are shifted into the left positions.
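
    For what it's worth, a sketch of what actually happens once Java's numeric promotion is taken into account: both operands of & and the left operand of >>> are widened to 32-bit int first, so the sign bits are already present before the shift, and the cast back to byte keeps only the low 8 bits:

        bar as int          = 0b11111111 11111111 11111111 10000000  (-128)
        bar & (byte)-64     = 0b11111111 11111111 11111111 10000000  (the mask is sign-extended too)
        ... >>> 6           = 0b00000011 11111111 11111111 11111110
        (byte) of the above = 0b11111110  (-2)

    Masking with an int literal such as 0xC0 instead of (byte)-64 (i.e. (bar & 0xC0) >>> 6), or masking after the shift ((bar >> 6) & 0x03), avoids the sign extension and yields the expected 2.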

    Read the article

  • 16-bit processor, memory addressing and memory cells

    - by Zia ur Rahman
    Suppose the accumulator register of a processor is 16 bits wide; we can then call it a 16-bit processor, i.e. it supports 16-bit addressing. Now my question is: how do we calculate the number of memory cells that can be addressed with 16-bit addressing? By my calculation, 2 to the power 16 is 65536, which means the memory has 65536 cells. Now, if we take 1 KB = 1000 bytes, this becomes 65536/1000 = 65.5, meaning about 65 kilobytes of memory can be used with a processor with 16-bit addressing. But if we take 1 KB = 1024 bytes, this becomes 65536/1024 = 64, meaning 64 kilobytes. Now tell me, am I right or wrong, and why do people say that 64 KB of memory can be used with a processor with 16-bit addressing?

    Read the article

  • Why is a 16-bit register used with the BSR instruction in this code snippet?

    - by sharptooth
    In this hardcore article there's a function find_maskwidth() that basically detects the number of bits required to represent itemCount distinct values:

        unsigned int find_maskwidth( unsigned int itemCount )
        {
            unsigned int maskWidth, count = itemCount;
            __asm {
                mov eax, count
                mov ecx, 0
                mov maskWidth, ecx
                dec eax
                bsr cx, ax
                jz next
                inc cx
                mov maskWidth, ecx
            next:
            }
            return maskWidth;
        }

    The question is: why do they use the ax and cx registers instead of eax and ecx?
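
    For context, a portable C sketch of the same computation, i.e. ceil(log2(itemCount)), the width in bits needed to number itemCount distinct values:

        /* Width of the mask needed to represent values 0 .. itemCount-1. */
        unsigned int find_maskwidth_portable(unsigned int itemCount)
        {
            unsigned int width = 0;
            unsigned int n = itemCount - 1;
            while (n) {        /* index of the highest set bit, plus one */
                width++;
                n >>= 1;
            }
            return width;      /* e.g. itemCount = 5 -> 3 bits */
        }

    This mirrors the assembly step for step: dec eax, then bsr to find the highest set bit, then inc to turn a bit index into a width.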

    Read the article

  • Negative logical shift

    - by user320862
    In Java, why does -32 >>> -1 = 1? It's not specific to just -32; it works for all negative numbers as long as they're not too big. I've found that

        x >>> -1 = 1
        x >>> -2 = 3
        x >>> -3 = 7
        x >>> -4 = 15

    given 0 > x > some large negative number. Isn't >>> -1 the same as << 1? But -32 << 1 = -64. I've read up on two's complement, but I still don't understand the reasoning.
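
    For what it's worth, a worked sketch of the usual explanation: Java masks the shift distance of an int to its low 5 bits, so a negative distance wraps around modulo 32 rather than meaning "shift the other way":

        -1 & 0x1F  = 31,  so  x >>> -1  ==  x >>> 31
        -32        = 0b11111111 11111111 11111111 11100000
        -32 >>> 31 = 0b00000000 00000000 00000000 00000001  = 1
        -32 >>> 30 = 0b00000000 00000000 00000000 00000011  = 3

    which is why every (not too large) negative number gives 1, 3, 7, 15, ...: the operator is pulling the leading sign bits down to the right, not shifting left.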

    Read the article

  • Python proper use of __str__ and __repr__

    - by Peter
    Hey, my current project requires extensive use of bit fields. I found a simple, functional recipe for a bit field class, but it was lacking a few features I needed, so I decided to extend it. I've just gotten to implementing __str__ and __repr__, and I want to make sure I'm following convention. __str__ is supposed to be informal and concise, so I've made it return the bit field's decimal value (i.e. str() of bit field 11 would be "3"). __repr__ is supposed to be an official representation of the object, so I've made it return the actual bit string (i.e. repr() of bit field 11 would be "11"). In your opinion, does this implementation meet the conventions for __str__ and __repr__? Additionally, I have used the bin() function to get the bit string of the value stored in the class. This isn't compatible with Python < 2.6; is there an alternative method? Cheers, Pete

    Read the article

  • How to make a COM ActiveX object work in 64-bit IE?

    - by Kurtevich
    Hi! I have a COM object embedded in an ASP.NET page using <object classid="clsid:XXX...">. It works in 32-bit IE but not in 64-bit IE - I can't access its functions, and there are no error messages or event logs to get information from. The DLL is written in C#, includes a COM-visible class, is compiled for Any CPU (though I also tried x86), and is registered during client installation by running regasm. This creates the registry keys, and everything works fine except in 64-bit IE. I searched the internet for the issue, or at least some guidelines, and didn't find anything. I received an answer on another forum mentioning _MERGE_PROXYSTUB (a preprocessor definition, I guess?) and the ProxyStubClsid32 registry key, but it wasn't very detailed. So I searched again, didn't find much, and experimented: I rebuilt with _MERGE_PROXYSTUB defined and created ProxyStubClsid32 keys everywhere, with no result. What could possible solutions be, or at least points to look at? Is there a way to at least get logs about why 64-bit IE can't access it?
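
    One thing worth checking, offered as an assumption rather than a diagnosis: a 64-bit IE process can only instantiate the COM object if it is registered in the 64-bit registry view, and the 32-bit regasm writes only the 32-bit (Wow6432Node) view. Running the 64-bit regasm against the same Any CPU assembly registers it for 64-bit clients; the framework version directory below is a placeholder for whichever runtime the DLL targets:

        %windir%\Microsoft.NET\Framework64\v2.0.50727\regasm.exe MyComponent.dll /codebase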

    Read the article

  • Why is OpenSubKey() returning null on my Win 7 64-bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64-bit systems when the 32-bit registry keys are under Wow6432Node? I'm working on a unit-testing framework that calls OpenSubKey() from the .NET library. My dev system is Win 7 64-bit with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32-bit application, so its registry keys are virtualized under HKLM\Software\Wow6432Node. When we call:

        Registry.LocalMachine.OpenSubKey( @"Software\MyCompany\MyApp\" );

    null is returned. However, explicitly stating where to look works:

        Registry.LocalMachine.OpenSubKey( @"Software\Wow6432Node\MyCompany\MyApp\" );

    From what I understand, this function should be agnostic to 32-bit and 64-bit environments and should know to jump to the virtualized node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application runs just fine on the same system and gets the registry keys necessary to run, which are also placed under HKLM\Software\Wow6432Node. Any suggestions? Thanks in advance!
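
    For background, the registry redirector keys off the bitness of the calling process, not the machine: a 64-bit process (such as a test runner built as Any CPU on x64) reading HKLM\Software gets the 64-bit view, while the installed 32-bit app is silently redirected to Wow6432Node, which would explain the difference. At the Win32 level the view can be chosen explicitly with the WOW64 access flags; a minimal C sketch (the key path is the question's own example):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HKEY hKey;
            /* KEY_WOW64_32KEY requests the 32-bit view (Wow6432Node)
               even from a 64-bit process. */
            LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                    "Software\\MyCompany\\MyApp",
                                    0, KEY_READ | KEY_WOW64_32KEY, &hKey);
            if (rc == ERROR_SUCCESS) {
                puts("opened the 32-bit view");
                RegCloseKey(hKey);
            }
            return 0;
        }

    (.NET 4 later exposed the same choice as RegistryView.Registry32/Registry64 on RegistryKey.OpenBaseKey; on .NET 3.5 the usual options are P/Invoking as above or forcing the test host to run as x86.)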

    Read the article

  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with SharePoint, continuous integration, and out-of-the-box MSTest. Unit tests for plain business-logic classes run just fine, and test results are published into TFS. However, any test that uses SharePoint's API fails horribly, with SPFarm.Local returning null and so on. Is there a way to fix this? The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, SharePoint patched up to the June 2009 cumulative update) from both Visual Studio and the command line, so the problem is not improper use of SPContext.Current or any other part of the API that needs to run in a web-server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm. The next likely culprit is that the unit tests are being run in a 32-bit process, which can't access the 64-bit SharePoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for the 2010 versions of the MS tools (and hope for the best), or is there a third-party test framework available that runs natively in 64-bit and can publish test results into TFS 2008?

    Read the article

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, a common technique for generating a monochrome bitmask from a bitmap, for transparency use, is the following:

        SetBkColor(hdcSource, clrTransparency);
        VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY));

    This assumes that hdcSource is a memory DC holding the source image and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4-bit color while the target is 1-bit monochrome). However, this seems to fail for me when the source is 32-bit color + alpha: instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black - no bits get set to white (1) - whereas it works for the 4-bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: if I use a source bitmap that is 16-color (4-bit), it works; if I use a 32-bit image, it produces the all-black mask. Is there an alternate method I should be using for 32-bit color images? Is there an issue with the alpha channel that overrides the normal behavior of this technique? Thanks for any help you may have to offer!

    ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+-produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all; instead I'm using TransparentBlt(), which seems to get it right (but I don't know what it does internally that's any different and lets it correctly mask the image). It would be useful to have a really good, working function:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency);

    that always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?
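
    One plausible explanation, offered as an assumption: the color-to-mono BitBlt compares whole 32-bit pixel values against the background color, and a COLORREF carries 0x00 in its high byte while GDI+-produced pixels usually carry 0xFF alpha, so no pixel ever matches and the mask comes out all black. A sketch of the corresponding workaround, clearing the alpha byte of a 32bpp image before the conversion (assumes you can reach the pixel data, e.g. via CreateDIBSection or GetDIBits):

        /* Zero the high (alpha) byte of every 32bpp pixel so the
           SetBkColor comparison can match a plain COLORREF. */
        void ClearAlphaChannel(DWORD *bits, int pixelCount)
        {
            for (int i = 0; i < pixelCount; i++)
                bits[i] &= 0x00FFFFFF;
        }

    After that, the SetBkColor + BitBlt technique from the question should behave as it does for the 4-bit source.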

    Read the article
