Search Results

Search found 7756 results on 311 pages for '64'.

Page 48/311

  • C++ shifting bits

    - by Bobby
    I am new to shifting bits, but I am trying to debug the following snippet:

        if (!(strcmp(arr[i].GetValType(), "f64"))) {
            dem_content_buff[BytFldPos]     = tmp_data;
            dem_content_buff[BytFldPos + 1] = tmp_data >> 8;
            dem_content_buff[BytFldPos + 2] = tmp_data >> 16;
            dem_content_buff[BytFldPos + 3] = tmp_data >> 24;
            dem_content_buff[BytFldPos + 4] = tmp_data >> 32;
            dem_content_buff[BytFldPos + 5] = tmp_data >> 40;
            dem_content_buff[BytFldPos + 6] = tmp_data >> 48;
            dem_content_buff[BytFldPos + 7] = tmp_data >> 56;
        }

    I am getting a warning saying that the lines shifting by 32 through 56 have a shift count that is too large. The "f64" in the predicate just means that the data should be 64-bit data. How should this be done?
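    A likely cause, stated as an assumption since the declaration of tmp_data isn't shown: tmp_data is a 32-bit type, and shifting a 32-bit value by 32 or more bits is undefined in C++, which is exactly what the warning is about. Declaring (or casting) the value as a 64-bit unsigned type before shifting resolves it; a minimal sketch with illustrative names:

        #include <cstdint>

        // Serialize a 64-bit value into a byte buffer, little-endian.
        void store_u64_le(unsigned char* buf, std::uint64_t tmp_data) {
            for (int i = 0; i < 8; ++i) {
                // Shifting a 64-bit operand by 0..56 bits is well defined.
                buf[i] = static_cast<unsigned char>(tmp_data >> (8 * i));
            }
        }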

    Read the article

  • portable way to deal with 64/32 bit time_t

    - by MK
    I have some code which is built on both Windows and Linux. Linux at this point is always 32-bit, but Windows is 32- and 64-bit. Windows wants time_t to be 64 bits and Linux still has it as 32 bits. I'm fine with that, except in some places time_t values are converted to strings. So when time_t is 32-bit this should be done with %d, and when it is 64-bit with %lld... What is the smart way to do this? Also: any ideas how I can find all the places where time_t values are passed to printf-style functions, to address this issue?
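    One common approach, sketched below: widen time_t to a fixed type at every call site, so the format specifier never depends on the platform:

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t now = time(NULL);
            /* The cast makes "%lld" correct whether time_t is 32 or 64 bits. */
            printf("now = %lld\n", (long long)now);
            return 0;
        }

    As for locating every affected call site, one trick is to temporarily build with format-string checking enabled (e.g. GCC's -Wformat, included in -Wall), which flags each place a raw time_t meets a %d.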

    Read the article

  • WPF: Menu items and combo boxes don't render in Windows 7 64-bit

    - by lilserf
    I'm trying to use an existing internal WPF application (I do have access to the source), but it was developed on XP and I'm using Windows 7 64-bit. When I click (for instance) the File menu, 90% of the time I see no drop-down menu at all. The menu still exists - I can use the arrow keys to navigate up and down and choose an option if I happen to know the order of the options, but nothing renders at all. The other 10% of the time, the menu or some portion of it DOES render, but as I move the cursor up and down I get graphical corruption or disappearing options until I end up back at the "no menu is visible at all" state. This is also true of combo boxes within the application - they show no data when I drop them down, but I can arrow down and choose an entry. Microsoft has some advice about WPF rendering issues here, but none of those steps has helped with my issue.
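    Since the symptoms (corruption on one machine only) suggest a video-driver issue, one workaround worth trying - assuming the app can be retargeted to .NET 4.0 or later - is forcing WPF into software rendering at startup; a sketch:

        using System.Windows;
        using System.Windows.Interop;
        using System.Windows.Media;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                // Bypass the GPU entirely; slower, but sidesteps driver bugs.
                RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
                base.OnStartup(e);
            }
        }

    On earlier framework versions, a similar per-user effect is available via the HKCU\Software\Microsoft\Avalon.Graphics\DisableHWAcceleration registry value (DWORD set to 1).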

    Read the article

  • Shared memory of the same DLL in different 32-bit processes is sometimes different in a terminal session

    - by KBrusing
    We have a 32-bit application consisting of several processes. They communicate via shared memory in a DLL used by every process. The shared memory is built from global variables in C++ via "#pragma data_seg ("Shared")". When running this application, sometimes while a new process is starting alongside an existing (first) process, we observe that the shared memory of the two processes is not the same. None of the newly started processes can communicate with the first process. After stopping all of our processes and restarting the application (with some processes), everything works fine. But sooner or later, after successfully starting and finishing new processes, the problem occurs again. Running on all other Windows versions, or in terminal sessions on Windows Server 2003, our application never has this problem. Is there any new "feature" in Windows Server 2008 that might disturb the harmony of our application?
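    Not an answer to the Server 2008 question as such, but for reference, the canonical form of this pattern is sketched below and is worth double-checking: if the linker directive marking the section read/write/shared is missing, each process silently gets a private copy, which would match the symptoms.

        #include <windows.h>

        #pragma data_seg("Shared")
        // Everything here must be explicitly initialized, or it lands in
        // the ordinary uninitialized-data section and is not shared at all.
        volatile LONG g_counter = 0;
        #pragma data_seg()

        // Without this, the section exists but is private to each process.
        #pragma comment(linker, "/SECTION:Shared,RWS")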

    Read the article

  • Makefile won't copy .o to obj/ and target to bin/ folders

    - by about blank
    I'm trying to write a Makefile which will copy its target and objects to bin/ and obj/ directories, respectively. Yet, when I try to run it I get the following error:

        nasm -f elf64 -g -F stabs main.asm -l spacelander.lst
        ld -o spacelander obj/main.o
        ld: cannot find obj/main.o: No such file or directory
        make: *** [spacelander] Error 1

    Why is this happening?

    Update: I noticed when I posted the error that it was due to whitespace errors. After taking care of those, I still get the new error above, which replaced the old one I mentioned prior. What is this?

    Update 2: Posted make -d output below the Makefile source.

    Source:

        ASM := nasm
        ARGS := -f
        FMT := elf64
        OPT := -g -F stabs
        SRC := main.asm
        OBJDIR := obj
        TARGETDIR := bin
        OBJ := $(addprefix $(OBJDIR)/,$(patsubst %.asm, %.o, $(wildcard *.asm)))
        TARGET := spacelander

        .PHONY: all clean

        all: $(OBJDIR) $(TARGET)

        $(OBJDIR):
                mkdir $(OBJDIR)

        $(OBJDIR)/%.o: $(SRC)
                $(ASM) $(ARGS) $(FMT) $(OPT) $(SRC) -l $(TARGET).lst

        $(TARGET): $(OBJ)
                ld -o $(TARGET) $(OBJ)

        clean:
                @rm -f $(TARGET) $(wildcard *.o)
                @rm -rf $(OBJDIR)

    make -d output - NOTE: the output is too long for the question body, so it is pastebinned: http://pastebin.com/3bctGJxs
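    A guess at the culprit, based only on the output above: the %.o rule never writes its output into obj/ (nasm defaults to main.o in the current directory), so ld cannot find obj/main.o. Telling nasm where to put the object file would look something like this (recipe lines must start with a tab in a real Makefile):

        $(OBJDIR)/%.o: %.asm
                $(ASM) $(ARGS) $(FMT) $(OPT) -o $@ $< -l $(TARGET).lst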

    Read the article

  • lshw tells me my processor is 64 bits but my motherboard has a 32-bit width

    - by bpetit
    Recently I noticed lshw tells me a strange thing. Here is the first part of my lshw output:

        bpetit-1025c
            description: Notebook
            product: 1025C (1025C)
            vendor: ASUSTeK COMPUTER INC.
            version: x.x
            serial: C3OAAS000774
            width: 32 bits
            capabilities: smbios-2.7 dmi-2.7 smp-1.4 smp
            configuration: boot=normal chassis=notebook cpus=2 family=Eee PC...
          *-core
                description: Motherboard
                product: 1025C
                vendor: ASUSTeK COMPUTER INC.
                physical id: 0
                version: x.xx
                serial: EeePC-0123456789
                slot: To be filled by O.E.M.
             *-firmware
                    description: BIOS
                    vendor: American Megatrends Inc.
                    physical id: 0
                    version: 1025C.0701
                    date: 01/06/2012
                    size: 64KiB
                    capacity: 1984KiB
                    capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd...
             *-cpu:0
                    description: CPU
                    product: Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
                    vendor: Intel Corp.
                    physical id: 4
                    bus info: cpu@0
                    version: 6.6.1
                    serial: 0003-0661-0000-0000-0000-0000
                    slot: CPU 1
                    size: 798MHz
                    capacity: 1865MHz
                    width: 64 bits
                    clock: 533MHz
                    capabilities: x86-64 boot fpu fpu_exception wp vme de pse tsc ...
                    configuration: cores=2 enabledcores=1 id=2 threads=2
                 *-cache:0
                        description: L1 cache
                        physical id: 5
                        slot: L1-Cache
                        size: 24KiB
                        capacity: 24KiB
                        capabilities: internal write-back unified
                 *-cache:1
                        description: L2 cache
                        physical id: 6
                        slot: L2-Cache
                        size: 512KiB
                        capacity: 512KiB
                        capabilities: internal varies unified
                 *-logicalcpu:0
                        description: Logical CPU
                        physical id: 2.1
                        width: 64 bits
                        capabilities: logical
                 *-logicalcpu:1
                        description: Logical CPU
                        physical id: 2.2
                        width: 64 bits
                        capabilities: logical
                 *-logicalcpu:2
                        description: Logical CPU
                        physical id: 2.3
                        width: 64 bits
                        capabilities: logical
                 *-logicalcpu:3
                        description: Logical CPU
                        physical id: 2.4
                        width: 64 bits
                        capabilities: logical
             *-memory
                    description: System Memory
                    physical id: 13
                    slot: System board or motherboard
                    size: 2GiB
                 *-bank:0
                        description: SODIMM [empty]
                        product: [Empty]
                        vendor: [Empty]
                        physical id: 0
                        serial: [Empty]
                        slot: DIMM0
                 *-bank:1
                        description: SODIMM DDR3 Synchronous 1066 MHz (0.9 ns)
                        product: SSZ3128M8-EAEEF
                        vendor: Xicor
                        physical id: 1
                        serial: 00000004
                        slot: DIMM1
                        size: 2GiB
                        width: 64 bits
                        clock: 1066MHz (0.9ns)
             *-cpu:1
                    physical id: 1
                    bus info: cpu@1
                    version: 6.6.1
                    serial: 0003-0661-0000-0000-0000-0000
                    size: 798MHz
                    capacity: 798MHz
                    capabilities: ht cpufreq
                    configuration: id=2
                 *-logicalcpu:0
                        description: Logical CPU
                        physical id: 2.1
                        capabilities: logical
                 *-logicalcpu:1
                        description: Logical CPU
                        physical id: 2.2
                        capabilities: logical
                 *-logicalcpu:2
                        description: Logical CPU
                        physical id: 2.3
                        capabilities: logical
                 *-logicalcpu:3
                        description: Logical CPU
                        physical id: 2.4
                        capabilities: logical

    So here I see my processor is effectively a 64-bit one. However, I'm wondering how my motherboard can have a "32 bits" width. I've browsed the web to find an answer, without success. I imagine it's just a technical fact that I don't know about. Thanks.

    Read the article

  • Could not connect to wireless until reboot (nl80211)

    - by user107410
    I'm using a Samsung NP900X3C and have an intermittent problem connecting to WiFi with Ubuntu 12.10. Sometimes my computer cannot connect to the WiFi network "blab", not even after rebooting the computer. The only solution is to restart the WiFi hotspot. It's a public WiFi network used by many other users who don't have this problem. My /var/log/syslog:

        Nov 12 10:09:39 k15 wpa_supplicant[1308]: wlan0: SME: Trying to authenticate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:09:39 k15 kernel: [    8.908610] wlan0: authenticate with 64:70:02:89:7c:d7
        Nov 12 10:09:39 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        Nov 12 10:09:39 k15 kernel: [    8.915032] wlan0: send auth to 64:70:02:89:7c:d7 (try 1/3)
        Nov 12 10:09:39 k15 wpa_supplicant[1308]: wlan0: Trying to associate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:09:39 k15 kernel: [    8.916753] wlan0: authenticated
        Nov 12 10:09:39 k15 kernel: [    8.916839] wlan0: waiting for beacon from 64:70:02:89:7c:d7
        Nov 12 10:09:39 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: authenticating -> associating
        Nov 12 10:09:39 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: associating -> disconnected
        Nov 12 10:09:39 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        Nov 12 10:09:42 k15 wpa_supplicant[1308]: wlan0: SME: Trying to authenticate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:09:42 k15 kernel: [   12.386212] wlan0: authenticate with 64:70:02:89:7c:d7
        Nov 12 10:09:42 k15 wpa_supplicant[1308]: wlan0: Trying to associate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:09:42 k15 kernel: [   12.389114] wlan0: send auth to 64:70:02:89:7c:d7 (try 1/3)
        Nov 12 10:09:42 k15 kernel: [   12.391021] wlan0: authenticated
        Nov 12 10:09:42 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        Nov 12 10:09:42 k15 kernel: [   12.391332] wlan0: waiting for beacon from 64:70:02:89:7c:d7
        Nov 12 10:09:42 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: authenticating -> associating
        Nov 12 10:09:43 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: associating -> disconnected
        Nov 12 10:09:43 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        Nov 12 10:09:46 k15 wpa_supplicant[1308]: wlan0: SME: Trying to authenticate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)

    and after restarting the hotspot, I can connect:

        Nov 12 10:11:51 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: inactive -> scanning
        Nov 12 10:11:55 k15 wpa_supplicant[1308]: wlan0: SME: Trying to authenticate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:11:55 k15 kernel: [  144.445154] wlan0: authenticate with 64:70:02:89:7c:d7
        Nov 12 10:11:55 k15 kernel: [  144.453994] wlan0: send auth to 64:70:02:89:7c:d7 (try 1/3)
        Nov 12 10:11:55 k15 wpa_supplicant[1308]: wlan0: Trying to associate with 64:70:02:89:7c:d7 (SSID='blab' freq=2427 MHz)
        Nov 12 10:11:55 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        Nov 12 10:11:55 k15 kernel: [  144.455860] wlan0: authenticated
        Nov 12 10:11:55 k15 kernel: [  144.458681] wlan0: associate with 64:70:02:89:7c:d7 (try 1/3)
        Nov 12 10:11:55 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: authenticating -> associating
        Nov 12 10:11:55 k15 kernel: [  144.462799] wlan0: RX AssocResp from 64:70:02:89:7c:d7 (capab=0x431 status=0 aid=9)
        Nov 12 10:11:55 k15 kernel: [  144.486368] wlan0: associated
        Nov 12 10:11:55 k15 wpa_supplicant[1308]: wlan0: Associated with 64:70:02:89:7c:d7
        Nov 12 10:11:55 k15 kernel: [  144.487435] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        Nov 12 10:11:55 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: associating -> associated
        Nov 12 10:11:55 k15 NetworkManager[1004]: <info> (wlan0): supplicant interface state: associated -> 4-way handshake

    This problem appears regularly. My WiFi device control is nl80211:

        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): using nl80211 for WiFi device control
        Nov 12 10:09:32 k15 NetworkManager[1004]: <warn> (wlan0): driver supports Access Point (AP) mode
        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): new 802.11 WiFi device (driver: 'iwlwifi' ifindex: 3)
        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/0
        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): now managed
        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
        Nov 12 10:09:32 k15 NetworkManager[1004]: <info> (wlan0): bringing up device.

    Read the article

  • How to be anonymous on the IPv6 protocol by not using the MAC address in EUI-64?

    - by iugamarian
    The IPv6 protocol has a feature called "Extended Unique Identifier" or EUI-64, which in short uses the MAC address of the network card when choosing an IPv6 address. Proof: http://www.youtube.com/watch?v=30CnqRK0GHE&NR=1 at 7:36 video time. If you want to be anonymous on the internet (so that nobody can find you when you download something, etc.), you need this EUI-64 behavior to be bypassed, so that the MAC address cannot be discovered by harmful third parties on the internet, and for privacy. How do you avoid EUI-64 MAC address usage in IPv6 address selection in Ubuntu? Also for DHCP IPv6?
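    For what it's worth, Linux implements RFC 4941 "privacy extensions" for exactly this purpose: the host periodically generates temporary, randomized interface identifiers instead of deriving them from the MAC. A sketch of enabling them system-wide via sysctl (the file name is illustrative):

        # /etc/sysctl.d/10-ipv6-privacy.conf
        net.ipv6.conf.all.use_tempaddr = 2
        net.ipv6.conf.default.use_tempaddr = 2

    A value of 2 both enables temporary addresses and prefers them over the EUI-64-derived ones.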

    Read the article

  • Adobe temporarily suspends development of the 64-bit Flash Player 10.1 for Linux, due to problems with the runtime

    Update of 18.06.2010 by Katleen: Adobe temporarily suspends development of the 64-bit Flash Player 10.1 for Linux because of problems with the runtime. Adobe has just announced the suspension of its Labs program for developing the 64-bit version of Flash Player 10.1 for Linux. The company states, however, that it remains "fully committed to delivering a native 64-bit Flash Player for the desktop, with native support for the Windows, Macintosh and Linux platforms, in an upcoming major update". According to the official announcement, the halt is only temporary and due to serious problems encountered in the runtime. Adobe is reportedly carrying out an architectural overhaul...

    Read the article

  • Using P/Invoke in C# to call sprintf and friends on 64-bit

    - by bde
    I am having an interesting problem using P/Invoke in C# to call _snwprintf. It works for integer types, but not for floating-point numbers. This is on 64-bit Windows; it works fine on 32-bit. My code is below; please keep in mind that this is a contrived example to show the behavior I am seeing.

        class Program
        {
            [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
            private static extern int _snwprintf([MarshalAs(UnmanagedType.LPWStr)] StringBuilder str, uint length, String format, int p);

            [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
            private static extern int _snwprintf([MarshalAs(UnmanagedType.LPWStr)] StringBuilder str, uint length, String format, double p);

            static void Main(string[] args)
            {
                Double d = 1.0f;
                Int32 i = 1;
                Object o = (object)d;
                StringBuilder str = new StringBuilder();

                _snwprintf(str, 32, "%10.1f", (Double)o);
                Console.WriteLine(str.ToString());

                o = (object)i;
                _snwprintf(str, 32, "%10d", (Int32)o);
                Console.WriteLine(str.ToString());

                Console.ReadKey();
            }
        }

    The output of this program is:

        0.0
        1

    It should print 1.0 on the first line, not 0.0, and so far I am stumped.
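    A plausible explanation, offered as a sketch rather than a confirmed diagnosis: _snwprintf is a varargs function, and the Windows x64 calling convention treats variadic floating-point arguments specially (the caller must mirror them in the integer registers), so a P/Invoke signature that declares double as a fixed parameter puts the value where the callee never looks - the integer case works only by coincidence. Declaring the import as variadic with C#'s __arglist keyword is one known way around this:

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        class Native
        {
            [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
            public static extern int _snwprintf(
                [MarshalAs(UnmanagedType.LPWStr)] StringBuilder str,
                uint length, string format, __arglist);

            static void Main()
            {
                var str = new StringBuilder(32);
                // The double now travels through the varargs path the CRT expects.
                _snwprintf(str, 32, "%10.1f", __arglist(1.0));
                Console.WriteLine(str.ToString());
            }
        }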

    Read the article

  • Invalid length for a Base-64 char array.

    - by Code Sherpa
    As the title says, I am getting: "Invalid length for a Base-64 char array." I have read about this problem on here, and it seems the suggestion is to store ViewState in SQL if it is large. I am using a wizard with a good deal of data collection, so chances are my ViewState is large. But before I turn to the "store-in-DB" solution, maybe somebody can take a look and tell me if I have other options? I construct the email for delivery using the method below:

        public void SendEmailAddressVerificationEmail(string userName, string to)
        {
            string msg = "Please click on the link below or paste it into a browser to verify your email account.<BR><BR>" +
                "<a href=\"" + _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" + userName.Encrypt("verify") + "\">" +
                _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" + userName.Encrypt("verify") + "</a>";

            SendEmail(to, "", "", "Account created! Email verification required.", msg);
        }

    The Encrypt method looks like this:

        public static string Encrypt(string clearText, string Password)
        {
            byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(clearText);
            PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password,
                new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
            byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(32), pdb.GetBytes(16));
            return Convert.ToBase64String(encryptedData);
        }

    On the receiving end, the VerifyEmail.aspx.cs page has the line:

        string username = Cryptography.Decrypt(_webContext.UserNameToVerify, "verify");

    And the Decrypt method looks like:

        public static string Decrypt(string cipherText, string password)
        {
            // THE ERROR IS THROWN HERE!!
            byte[] cipherBytes = Convert.FromBase64String(cipherText);

    Can this error be remedied with a code fix, or must I store ViewState in the database? Thanks in advance.
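    One thing worth checking before moving ViewState to the database: the exception is thrown while decoding the query-string token, not ViewState, and Base64 output routinely contains '+', '/', and '=' characters that get mangled when embedded raw in a URL. URL-encoding the token when the link is built is the usual fix; a sketch (the helper name is hypothetical):

        using System.Web;

        static string BuildVerifyUrl(string rootUrl, string encryptedUserName)
        {
            // Protects '+', '/' and '=' in the Base64 token across the round trip.
            return rootUrl + "Accounts/VerifyEmail.aspx?a=" + HttpUtility.UrlEncode(encryptedUserName);
        }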

    Read the article

  • Windows 7 64 Bit - ODBC32 - Legacy App Problem

    - by Arturo Caballero
    Good day StackOverflowers, I'm a little stuck (really stuck) with an issue in a legacy application in my organization. I have a Windows 7 Enterprise 64-bit machine with Access 2000 installed and the legacy app (it is built with something like VB, but older). The app uses a System ODBC DSN in order to connect to a SQL Server 2000 database on a remote server. I created the DSN using the C:\Windows\SysWOW64\odbcad32.exe app (the 32-bit ODBC Administrator) in order to create a System DSN. I did not use the default Windows 7 (64-bit) one because DSNs created there are not visible to the legacy app. I tested the ODBC connection with Access and it worked OK; I can access the remote database. Then I ran the legacy app as Administrator. The app can see the ODBC DSN, but I am getting errors on credential validation:

        DIAG [08001] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]SQL Server does not exist or access denied. (17)
        DIAG [01000] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]ConnectionOpen (Connect()). (53)
        DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0)

    I use Trusted Connection on the ODBC DSN in order to validate the user against the domain controller. I think the credentials are not being sent by the legacy app to the ODBC driver, or something like that. I don't have the source code of the legacy app, so I cannot debug the connection. Also, I turned off the firewall. Any ideas?? Thanks in advance!

    Read the article

  • Draw on screen border in Commodore 64

    - by Stefano Borini
    Ok. I hope this does not get closed, because I have had this curiosity for 25 years and I would love to understand the trick. In the Commodore 64, the border was not addressable by the 6569 VIC. All you could do was draw pixels in the central area, the one where the cursor moved. The border was always uniform, although you could change its color with poke 53280,color, if I remember correctly. Nevertheless, I clearly remember game intros where the border featured graphics, as if it were fully addressable. I tried to understand how it worked but never got to the point. Legend says it was a clever use of sprites, which could, under some circumstances, be drawn on the border, but I don't know if it's an urban legend.

    Edit: just read this in one of the provided links: "Sprites were multiplexed across vertical raster lines (over 8 sprites, sometimes up to 120 sprites). Until the group Crest released Krestage 3 in May 2007 there was the common perception that no more than 8 sprites could appear at one raster line, but assigning new Y coordinates made it reappear further down the screen." This is evil... you beat the raster and reposition the sprite before it gets there...
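    For what it's worth, the commonly documented half of the trick is fooling the VIC's border logic rather than drawing into the border: switch from 25-row to 24-row mode (bit 3 of $D011) while the raster beam is between the two compare lines, and the test that would switch the lower border on never fires, leaving it open - and sprites, which the VIC renders independently of the text window, stay visible there. A rough 6502 sketch of the idea (the raster value and restore timing are illustrative, not a tested routine):

                sei                 ; keep interrupts from disturbing the timing
        wait:   lda $d012           ; current raster line (low 8 bits)
                cmp #$f9            ; line 249: inside the last text row
                bne wait
                lda $d011
                and #%11110111      ; clear RSEL -> 24-row mode; compare missed
                sta $d011
                ; ... restore 25-row mode before the same point next frame:
                lda $d011
                ora #%00001000
                sta $d011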

    Read the article

  • Windows Sidebar gadget not working in Vista Home Premium (i.e. 64-bit OS)

    - by stanley
    Hi all, I have developed a Windows Sidebar gadget which plays videos in a Flash player. It works in Vista Home Basic (32-bit OS) but doesn't work in Vista Home Premium (64-bit OS). I use Flash Player 9 and ActionScript 3.0. Can anyone help me, please? This is the HTML content for the player:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
                codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"
                width="130" height="200" id="FLVPlayer">
          <param name="movie" value="test.swf" />
          <param name="salign" value="lt" />
          <param name="quality" value="high" />
          <param name="scale" value="noscale" />
          <param name="FlashVars" value="&MM_ComponentVersion=1&skinName=Clear_Skin_1&streamName=2973&autoPlay=true&autoRewind=true" />
          <embed src="test.swf"
                 flashvars="&MM_ComponentVersion=1&skinName=Clear_Skin_1&streamName=2973&autoPlay=true&autoRewind=true"
                 quality="high" scale="noscale" width="130" height="200" name="FLVPlayer" salign="LT"
                 type="application/x-shockwave-flash"
                 pluginspage="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash" />
        </object>

    Read the article

  • VERY simple C program won't compile with VC 64

    - by Paperflyer
    Here is a very simple C program:

        #include <stdio.h>

        int main (int argc, char *argv[])
        {
            printf("sizeof(short)     = %d\n", (int)sizeof(short));
            printf("sizeof(int)       = %d\n", (int)sizeof(int));
            printf("sizeof(long)      = %d\n", (int)sizeof(long));
            printf("sizeof(long long) = %d\n", (int)sizeof(long long));
            printf("sizeof(float)     = %d\n", (int)sizeof(float));
            printf("sizeof(double)    = %d\n", (int)sizeof(double));
            return 0;
        }

    While it compiles fine on Win32 (command line: cl main.c), it does not with the Win64 compiler ("c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\cl.exe" main.c). Specifically, it says "error LNK2019: unresolved external symbol printf referenced in function main". As far as I understand this, it cannot link to printf, right? Obviously, I have Microsoft Visual C++ Compiler 2008 (Standard enu) x86 and x64 installed and am using the 64-bit flavor of Windows (7). What is the problem here?
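    A plausible cause, offered as a guess: invoking the amd64 cl.exe directly leaves the LIB environment variable pointing at the 32-bit libraries, so the linker never finds the x64 CRT that provides printf. Setting up the 64-bit environment first is the usual route; a sketch:

        "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" amd64
        cl main.c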

    Read the article

  • Excel 2010 64-bit can't create .NET object

    - by aboes81
    I have a simple class library that I use in Excel. Here is a simplification of my class:

        using System;
        using System.Runtime.InteropServices;

        namespace SimpleLibrary
        {
            [ComVisible(true)]
            public interface ISixGenerator
            {
                int Six();
            }

            public class SixGenerator : ISixGenerator
            {
                public int Six()
                {
                    return 6;
                }
            }
        }

    In Excel 2007 I would create a macro-enabled workbook and add a module with the following code:

        Public Function GetSix()
            Dim lib As SimpleLibrary.SixGenerator
            Set lib = New SimpleLibrary.SixGenerator
            GetSix = lib.Six
        End Function

    Then in Excel I could call the function GetSix() and it would return six. This no longer works in Excel 2010 64-bit. I get "Run-time error '429': ActiveX component can't create object". I tried changing the platform target to x64 instead of Any CPU, but then my code wouldn't compile unless I unchecked the "Register for COM interop" option; doing so means my macro-enabled workbook cannot see SimpleLibrary.dll, as it is no longer registered. Any ideas how I can use my library with Excel 2010 64-bit?
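    One avenue worth trying, stated as an assumption rather than a verified fix: a 64-bit process only sees COM components registered in the 64-bit registry view, and Visual Studio's "Register for COM interop" checkbox runs the 32-bit regasm. Keeping the Any CPU build and registering the assembly manually with the 64-bit regasm may make the library visible again; a sketch (the framework directory depends on your target version):

        %windir%\Microsoft.NET\Framework64\v2.0.50727\regasm.exe SimpleLibrary.dll /codebase /tlb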

    Read the article

  • How to produce 64-bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left-shift operator works only for 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>

        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;

            cout << "Enter input:" << endl;
            cin >> input;

            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;
            return 0;
        }

    The output of the program:

        Enter input:
        30
        536870912
        536870912

        Enter input:
        62
        536870912
        536870912

    How could I produce 64-bit masks?
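    The snag is that the shift is evaluated in the type of the left operand: the literal 1 is a plain 32-bit int, so the shift wraps long before the result is assigned to the long long. Making the literal 64 bits wide fixes it; a minimal sketch:

        #include <iostream>
        #include <cstdint>

        int main() {
            int input = 62;
            // 1ULL is 64 bits wide, so shifts up to 63 are well defined.
            std::uint64_t mask = 1ULL << (input - 1);
            std::cout << mask << std::endl;  // prints 2305843009213693952
            return 0;
        }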

    Read the article

  • iPod touch has extremely slow wifi, drops packets - only on my router

    - by mskfisher
    I just purchased an iPod Touch. I am having a lot of trouble with its speeds on my Tenda W311R, but it has no speed problems on my neighbor's Netgear router. It will connect and authenticate to my network, but the Speed Test app from speedtest.net shows rates near 20-50 kbps. If I run the speed test immediately after powering the iPod on, it will get speeds of 10-20 Mbps, like it should - but the speed drops to the kbps range about 10-15 seconds afterward. I get the same behavior with and without encryption, and regardless of N, G, or B compatibility settings in the router. I've tried rebooting the iPod and resetting the network settings, but it's still slow. I've tried pinging the iPod from another computer, and it shows about 40% packet loss:

        $ ping 192.168.0.111
        PING 192.168.0.111 (192.168.0.111): 56 data bytes
        64 bytes from 192.168.0.111: icmp_seq=0 ttl=64 time=14.188 ms
        64 bytes from 192.168.0.111: icmp_seq=1 ttl=64 time=11.556 ms
        64 bytes from 192.168.0.111: icmp_seq=2 ttl=64 time=5.675 ms
        64 bytes from 192.168.0.111: icmp_seq=3 ttl=64 time=5.721 ms
        Request timeout for icmp_seq 4
        64 bytes from 192.168.0.111: icmp_seq=5 ttl=64 time=6.491 ms
        Request timeout for icmp_seq 6
        64 bytes from 192.168.0.111: icmp_seq=7 ttl=64 time=8.065 ms
        Request timeout for icmp_seq 8
        Request timeout for icmp_seq 9
        Request timeout for icmp_seq 10
        64 bytes from 192.168.0.111: icmp_seq=11 ttl=64 time=9.605 ms

    Signal strength is good - I'm never more than 20 feet from my access point, and it exhibits the same behavior if I'm standing next to the router. It works just well enough to receive text, but videos don't work at all. App downloads are hit and miss. I've tweaked just about all of the settings I can see to tweak, and I'm at a loss. I have also been searching Google for the past three days, all to no avail. Any suggestions?

    Read the article

  • Compiling OpenCV 2.4 on a 64-bit Mac in Xcode

    - by Walt
    I have an OpenCV project that I've been developing under Ubuntu 12.04, on a Parallels VM on a Mac, which has an x86_64 architecture. There have been many screen-switching performance issues that I believe are due to the VM, where Linux video modes flip around for a couple of seconds while the OpenCV application accesses the camera. I decided to move the project into Xcode on the Mac side of the computer to continue the OpenCV development. However, I'm not that familiar with Xcode and am having trouble getting the project to build correctly there. I have Xcode installed. I downloaded and decompressed the latest version of OpenCV on the Mac, and from ~/src/opencv/build/ ran:

        cmake-gui -G Xcode ..

    per the instructions from Willow Garage and various other locations. This appeared to work fine (however, I'm wondering now if I'm missing an architecture setting in here, although it is 64-bit Intel in Xcode). I then set up an Xcode project with the source files from the Linux project and changed the include directories to use /opt/local/include/... rather than /usr/local/include/... I switched Xcode to use the LLVM GCC compiler in the build settings for the project, then set the Apple LLVM Dialect for C++ language dialect to GNU++11 (which seems possibly inconsistent with the line above). I'm not using a makefile in Xcode (that I'm aware of - it has its own project file...). I was also running into a linker issue that looked like it might be resolved with the addition of this linker flag: -lopencv_video, based on a similar posting here: other thread. However, in that case the person was using a Makefile in their project. I've tried adding this linker flag under "Other Linker Flags" in the Xcode build settings but still get build errors. I think I may have two issues here: one with the architecture settings when building the OpenCV libraries with CMake, and one with the linker flag settings in my project. Currently the build error list looks like this:

        Undefined symbols for architecture x86_64:
          "cv::_InputArray::_InputArray(cv::Mat const&)", referenced from:
              _main in main.o
          "cv::_OutputArray::_OutputArray(cv::Mat&)", referenced from:
              _main in main.o
          "cv::Mat::deallocate()", referenced from:
              cv::Mat::release() in main.o
          "cv::Mat::copySize(cv::Mat const&)", referenced from:
              cv::Mat::Mat(cv::Mat const&) in main.o
              cv::Mat::operator=(cv::Mat const&) in main.o
          "cv::Mat::Mat(_IplImage const*, bool)", referenced from:
              _main in main.o
          "cv::imread(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)", referenced from:
              _main in main.o
        ---SNIP---
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status

    Can anyone provide some guidance on what to try next? Thanks, Walt
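    Since the missing symbols (cv::Mat, cv::imread) live in OpenCV's core and highgui libraries, one setting worth trying under "Other Linker Flags" - assuming the built libraries landed in /opt/local/lib to match the include path above; adjust to wherever CMake actually installed them - is:

        -L/opt/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_video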

    Read the article

  • Invalid character in a Base-64 string when concatenating and URL-encoding a string

    - by Rob
    I'm trying to write some encryption code whose output is passed through a URL. For the sake of the issue, I've excluded the actual encryption of the data and just shown the code causing the problem. I take a salt value, convert it to a byte array, and then convert that to a Base64 string. I concatenate this string to another Base64 string (which was previously a byte array). These two Base64 strings are then URL-encoded. Here's my code:

        using System;
        using System.Text;
        using System.Security.Cryptography;
        using System.Web;

        class RHEncryption
        {
            private static readonly Encoding ASCII_ENCODING = new System.Text.ASCIIEncoding();
            private static readonly string SECRET_KEY = "akey";

            private static string md5(string text)
            {
                return BitConverter.ToString(new MD5CryptoServiceProvider()
                    .ComputeHash(ASCII_ENCODING.GetBytes(text))).Replace("-", "").ToLower();
            }

            public string UrlEncodedData;

            public RHEncryption()
            {
                // encryption object
                RijndaelManaged aes192 = new RijndaelManaged();
                aes192.KeySize = 192;
                aes192.BlockSize = 192;
                aes192.Mode = CipherMode.CBC;
                aes192.Key = ASCII_ENCODING.GetBytes(md5(SECRET_KEY));
                aes192.GenerateIV();

                // convert IV to base64 for sending
                string base64IV = Convert.ToBase64String(aes192.IV);

                // salt value
                string s = "maryhadalittlelamb";
                string salt = s.Substring(0, 8);

                // convert to byte array and base64 for sending
                byte[] saltBytes = ASCII_ENCODING.GetBytes(salt.TrimEnd('\0'));
                string base64Salt = Convert.ToBase64String(saltBytes);

                // url-encode the concatenated base64 strings
                UrlEncodedData = HttpUtility.UrlEncode(base64Salt + base64IV, ASCII_ENCODING);
            }

            public string UrlDecodedData()
            {
                // decode the url-encoded string
                string s = HttpUtility.UrlDecode(UrlEncodedData, ASCII_ENCODING);

                // convert back from base64
                byte[] base64DecodedBytes = null;
                try
                {
                    base64DecodedBytes = Convert.FromBase64String(s);
                }
                catch (FormatException e)
                {
                    Console.WriteLine(e.Message.ToString());
                    Console.ReadLine();
                }
                return s;
            }
        }

    If I then call the UrlDecodedData method, I get an "Invalid character in a Base-64 string" exception. This is generated because the base64Salt variable contains an invalid character (I'm guessing a line termination), but I can't seem to strip it off.
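    One reading of the failure, offered as a guess: Base64 uses '=' only as terminal padding, and the salt's encoding (8 input bytes always produce exactly 12 Base64 characters, here ending in '=') is glued to the IV's encoding, leaving padding in the middle of the combined string, which Convert.FromBase64String rejects. Decoding the two pieces separately avoids that; a sketch for the decode side:

        string s = HttpUtility.UrlDecode(UrlEncodedData, ASCII_ENCODING);
        // 8 salt bytes always encode to exactly 12 Base64 characters.
        byte[] saltBytes = Convert.FromBase64String(s.Substring(0, 12));
        byte[] ivBytes   = Convert.FromBase64String(s.Substring(12));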

    Read the article

  • Could my 64-bit server be somehow identifying itself as a 32-bit server?

    - by Deane
    Has anyone ever heard of a 64-bit OS identifying itself as a 32-bit OS? We have a Windows Server 2008 R2 x64 development server. We've been trying to activate it with a product key from MSDN, but it keeps telling us the key is invalid. I've opened a ticket with MSDN for this. Then something odd happened - I tried to install a 64-bit version of SQL Server 2005. After it extracted, we got this message:

        This version of hotfix.exe is not compatible with the version of Windows you're running.
        Check your computer's system information to see whether you need an x86 (32-bit) or
        x64 (64-bit) version of the program...

    Now, we're pretty sure this is a 64-bit OS. Computer Properties says:

        System Type: 64-bit Operating System

    Also, we have both a "Program Files" and a "Program Files (x86)" directory. I don't know how the product key activator or the SQL install program attempts to divine the type of OS, but could it be... wrong?

    Read the article

  • Why is the 64-bit version called amd64 and the 32-bit version called i386?

    - by ajsie
    I have never understood this. This is what I know: use a 64-bit OS if you want to handle more than 2GB of RAM, else a 32-bit OS. So on Ubuntu's homepage you can download either 64-bit or 32-bit. But the 64-bit version is called amd64 and the 32-bit version is called i386. So do I have to have an AMD processor to run amd64? And an Intel one to run i386? And if someone codes a piece of software (let's say Apache), does he have to code one 32-bit and one 64-bit version? Do some programs only exist as 32-bit and not 64-bit, and vice versa? Thanks in advance!

    Read the article

  • CUDA not working in 64-bit Windows 7

    - by Programmer
    I have CUDA Toolkit 4.0 installed on 64-bit Windows 7. I try building my CUDA code:

        #include <iostream>
        #include "cuda_runtime.h"
        #include "cuda.h"

        __global__ void kernel() {
        }

        int main() {
            kernel<<<1,1>>>();
            int c = 0;
            cudaGetDeviceCount(&c);
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);
            std::cout << "the name is" << prop.name;
            std::cout << "Hello World!" << c << std::endl;
            system("pause");
            return 0;
        }

    but the build fails. Below is the build log:

        Build Log
        Rebuild started: Project: god, Configuration: Debug|Win32

        Command Lines
        Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat" with contents
        [
        @echo off
        echo "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
        "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\sample.cu"
        if errorlevel 1 goto VCReportError
        goto VCEnd
        :VCReportError
        echo Project : error PRJ0019: A tool returned an error code from "Compiling with CUDA Build Rule..."
        exit 1
        :VCEnd
        ]
        Creating command line """c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat"""
        Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" with contents
        [
        /OUT:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe" /LIBPATH:"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\lib\x64" /MANIFEST /MANIFESTFILE:"Debug\god.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.pdb" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 cudart.lib cuda.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib ".\Debug\sample.cu.obj"
        ]
        Creating command line "link.exe @"c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" /NOLOGO /ERRORREPORT:PROMPT"

        Output Window
        Compiling with CUDA Build Rule...
        "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
        sample.cu
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaLaunch@4 referenced in function "enum cudaError __cdecl cudaLaunch(char *)" (??$cudaLaunch@D@@YA?AW4cudaError@@PAD@Z)
        sample.cu.obj : error LNK2019: unresolved external symbol ___cudaRegisterFunction@40 referenced in function "void __cdecl __sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?__sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
        sample.cu.obj : error LNK2019: unresolved external symbol ___cudaRegisterFatBinary@4 referenced in function "void __cdecl __sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?__sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceProperties@8 referenced in function _main
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceCount@4 referenced in function _main
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaConfigureCall@32 referenced in function _main
        C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe : fatal error LNK1120: 7 unresolved externals

        Results
        Build log was saved at "file://c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BuildLog.htm"
        god - 8 error(s), 0 warning(s)

    I will be highly obliged if someone could help me. Thanks

    Read the article

  • Strange performance behaviour for 64-bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments no longer fit in an integer. But should this matter? The parameter is declared as long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > maxint? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3L);
        TestLong(sw, int.MaxValue - 2L);
        TestLong(sw, int.MaxValue - 1L);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1L);
        TestLong(sw, int.MaxValue + 2L);
        TestLong(sw, int.MaxValue + 3L);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I now tried the same with C, and the issue does not occur there; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL*100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", clock()-t, n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", sizeof (int), sizeof(long), sizeof(long long), sizeof(void*));
            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);
            getchar();
            return 0;
        }

    EDIT 2: Thanks for the great suggestions. I found that both .NET and C (in debug as well as in release mode) do not use atomic CPU instructions to calculate the remainder; they call a function that does. In the C program I could get its name, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors instead of dividends, which was the case in the .NET application. I also found out that the performance of the C program really is only affected by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT 3: I now ran the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behavior stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

  • Quest releases NetVault Backup, Spotlight, Foglight, JClass, JProbe, Shareplex, Management Console and Authentication Services on Solaris 11

    - by user13333379
    Quest released the following products on Solaris 11 (SPARC, x64):

    • Quest NetVault Backup Server v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest NetVault Backup Client v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest Spotlight on Unix v8.0 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest Spotlight on Oracle v9.0 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest Authentication Services (formerly Vintela Authentication Services) v4.0.3 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest One Management Console for Unix (formerly Quest Identity Manager for Unix) - Solaris 11, 10, 9; SPARC/x86/64
    • Quest Foglight for Operating System v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    • Quest Foglight Agent Manager v5.6.x - Solaris 11, 10, 9; SPARC/x86/64, including zones
    • Quest Foglight Cartridge for Infrastructure v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    • Quest JClass v6.5 - Solaris 11, 10, 9; SPARC/x86/64
    • Quest JProbe v9.5 - Solaris 11: x86
    • Quest Shareplex for Oracle v7.6.3 - Solaris 11, 10, 9; SPARC/x86/64

    Read the article
