Search Results

Search found 19958 results on 799 pages for 'bit fiddling'.

Page 43/799

  • Need help diagnosing my machine

    - by Tom Collins
    Something slows my computer to a crawl sometimes, even when I'm not running anything big. Yesterday all I had running (besides background apps) were Firefox & Windows Explorer, and I could barely even switch windows. Nothing shows up in the Task Manager as hogging the CPU. I keep all non-essential services stopped (MySQL & MSSQL) unless I need them. I made some restore points not long ago, but they disappeared. This is a development machine with a LOT of apps installed, so I really, really do not want to reinstall Windows. What I'm looking for are ideas or tools I can use to help diagnose this problem. The only clues I have are that this started right after I installed Office 2013 (with Office 2010 still installed as well), installed Visual Studio 2012 (also keeping 2010 as a co-install), and installed MSSQL 2012 (an upgrade from 2008, no co-install). Also, the computer runs fine in Safe Mode. I've just run out of ideas of what to check. Any help / suggestions would be much appreciated. Thanks. P.S. I'm running Win 7 Pro (x64). Office is also 64-bit. Visual Studio & MSSQL are 64-bit if that option was available (not sure).

    Read the article

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to let users quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and every attempt has failed. AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them works. I have tried setting the VFW decompressor for the 'Other MPEG4' setting to XVID or LIBAVCODEC; there is no change in VirtualDub. AVI file B.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XVID codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change. AVI file C.avi will load, and it will split, but won't split using "Direct Stream Copy", claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option and it created a 27GB file out of what was supposed to be about a 300-400MB file. Can someone please give me some insight into what I'm messing up?

    Read the article

  • DNS and name server on CentOS 6.3 64-bit cannot be pinged from outside

    - by user135855
    I've got a problem with CentOS 6.3 64-bit. I want to set up my nameserver with BIND. Here is all my configuration:

        [root@izyon92 ~]# cat /etc/hosts
        127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
        182.19.26.92 izyon92.zyonize1.com izyon92

        [root@izyon92 ~]# cat /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=izyon92.zyonize1.com
        GATEWAY=182.19.26.89

        [root@izyon92 ~]# cat /etc/resolv.conf
        # Generated by NetworkManager
        search zyonize1.com
        nameserver 182.19.26.92

        [root@izyon92 ~]# cat /etc/named.conf
        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            #listen-on port 53 { 127.0.0.1; };
            listen-on-v6 port 53 { none; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { 182.19.26.92; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

        [root@izyon92 ~]# cat /etc/named.rfc1912.zones
        // named.rfc1912.zones:
        //
        // Provided by Red Hat caching-nameserver package
        //
        // ISC BIND named zone configuration for zones recommended by
        // RFC 1912 section 4.1 : localhost TLDs and address zones
        // and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt
        // (c)2007 R W Franks
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        zone "localhost.localdomain" IN { type master; file "named.localhost"; allow-update { none; }; };
        zone "localhost" IN { type master; file "named.localhost"; allow-update { none; }; };
        zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN { type master; file "named.loopback"; allow-update { none; }; };
        zone "1.0.0.127.in-addr.arpa" IN { type master; file "named.loopback"; allow-update { none; }; };
        zone "0.in-addr.arpa" IN { type master; file "named.empty"; allow-update { none; }; };
        zone "zyonize1.com" { type master; file "/var/named/zyonize.com.hosts"; };

        [root@izyon92 ~]# cat /var/named/zyonize.com.hosts
        $ttl 38400
        zyonize1.com. IN SOA 182.19.26.92. dev\.izyon.gmail.com. (
            1347436958 10800 3600 604800 38400 )
        zyonize1.com. IN NS 182.19.26.92.
        zyonize1.com. IN A 182.19.26.92
        www.zyonize1.com. IN A 182.19.26.92
        izyon92.zyonize1.com. IN A 182.19.26.92

    I have disabled SELinux and stopped iptables. dig and nslookup are working fine on the same machine:

        [root@izyon92 ~]# dig zyonize1.com
        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> zyonize1.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55751
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;zyonize1.com. IN A
        ;; ANSWER SECTION:
        zyonize1.com. 38400 IN A 182.19.26.92
        ;; AUTHORITY SECTION:
        zyonize1.com. 38400 IN NS 182.19.26.92.
        ;; Query time: 0 msec
        ;; SERVER: 182.19.26.92#53(182.19.26.92)
        ;; WHEN: Fri Sep 14 00:09:19 2012
        ;; MSG SIZE rcvd: 72

        [root@izyon92 ~]# nslookup zyonize1.com
        Server: 182.19.26.92
        Address: 182.19.26.92#53
        Name: zyonize1.com
        Address: 182.19.26.92

    But here is the problem I am facing. To test this DNS nameserver I have a Windows machine on which I set the first IPv4 DNS server to 182.19.26.92. Here are the details:

        Connection-specific DNS Suffix:
        Description: Realtek PCIe GBE Family Controller
        Physical Address: 14-FE-B5-9F-3A-A8
        DHCP Enabled: No
        IPv4 Address: 192.168.2.50
        IPv4 Subnet Mask: 255.255.255.0
        IPv4 Default Gateway: 192.168.2.1
        IPv4 DNS Servers: 182.19.26.92, 182.19.95.66
        IPv4 WINS Server:
        NetBIOS over Tcpip Enabled: Yes
        Link-local IPv6 Address: fe80::45cc:2ada:c13:ca42%16
        IPv6 Default Gateway:
        IPv6 DNS Server:

    When I ping from this machine it does not find the server, whereas from another server with another live IP, running Fedora, ping works fine.

    Read the article

  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped. I have 16 GB of RAM installed in an MSi 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and the resource monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) which I'm running on my boot drive is the same as the one that could actually access the 16 GB in my previous motherboard (MSi 870A-G54). I've updated my BIOS using the MSi Live Update tool and restarted the machine with no effect, and I cannot seem to locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect. These are my system's basic specifications:

        OS: Windows 7 Home Premium (64-bit)
        Motherboard: MSi 970A-G46
        CPU: AMD FX-8150
        Graphics Card: XFX Radeon HD 6870
        Boot Drive: OCZ Agility 3
        Storage Drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
        PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • How does one convert 16-bit RGB565 to 24-bit RGB888?

    - by jleedev
    I’ve got my hands on a 16-bit rgb565 image (specifically, an Android framebuffer dump), and I would like to convert it to 24-bit rgb888 for viewing on a normal monitor. The question is: how does one convert a 5- or 6-bit channel to 8 bits? The obvious answer is to shift it. I started out by writing this:

        uint16_t buf;
        while (read(0, &buf, sizeof buf)) {
            unsigned char red = (buf & 0xf800) >> 11;
            unsigned char green = (buf & 0x07e0) >> 5;
            unsigned char blue = buf & 0x001f;
            putchar(red << 3);
            putchar(green << 2);
            putchar(blue << 3);
        }

    However, this doesn’t have one property I would like, which is for 0xffff to map to 0xffffff instead of 0xf8fcf8. I need to expand the value in some way, but I’m not sure how that should work. The Android SDK comes with a tool called ddms (Dalvik Debug Monitor) that takes screen captures. As far as I can tell from reading the code, it implements the same logic; yet its screenshots come out different, and white maps to white. Here’s the raw framebuffer, the smart conversion by ddms, and the dumb conversion by the above algorithm. (By the way, this conversion is implemented in ffmpeg, but it just performs the dumb conversion listed above, leaving the LSBs at all zero.) I guess I have two questions: What’s the most sensible way to convert rgb565 to rgb888? How is DDMS converting its screenshots?
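
    A widely used answer to the expansion question is bit replication: shift the channel up, then copy its top bits into the vacated low bits, so an all-ones channel maps to 0xff and zero stays zero. A minimal C sketch of that scheme (an illustration of the technique, not a claim about what ddms actually does):

        #include <stdio.h>
        #include <stdint.h>

        /* Expand an n-bit channel to 8 bits by replicating its high bits
         * into the vacated low bits: 0x1f -> 0xff, 0x00 -> 0x00. */
        static uint8_t expand5(uint8_t v) { return (uint8_t)((v << 3) | (v >> 2)); }
        static uint8_t expand6(uint8_t v) { return (uint8_t)((v << 2) | (v >> 4)); }

        int main(void)
        {
            uint16_t buf;
            /* Read little-endian rgb565 pixels from stdin, write rgb888 bytes. */
            while (fread(&buf, sizeof buf, 1, stdin) == 1) {
                putchar(expand5((buf >> 11) & 0x1f)); /* red   */
                putchar(expand6((buf >>  5) & 0x3f)); /* green */
                putchar(expand5( buf        & 0x1f)); /* blue  */
            }
            return 0;
        }

    The alternative is multiplying by 255 and dividing by 31 (or 63), which rounds slightly better; replication is the cheap approximation of exactly that.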

    Read the article

  • 8 bit enum, in C

    - by oxinabox.ucc.asn.au
    I have to store instructions - commands that I will be receiving via serial. The commands will be 8 bits long. I'd like to use enumerations to deal with them in my code. The trouble is that an enumeration corresponds to an int, which on this platform is, I think, a 16-bit integer. I need to preserve transparency between a command name and its value, so as to avoid having to translate an 8-bit number received over serial into any other type. BTW, the platform is an AVR ATmega169V microcontroller, on the Butterfly demo board. It may be being underclocked to preserve power (I'm opposed to this; I believe the ATmega169V uses next to no power compared to a router. But that's getting off topic.) So I need to keep things fast, and I don't have any luxuries like file I/O or operating systems. Any suggestions as to what type I should be using to store 8-bit commands? There has got to be something better than a massive header of #defines.
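
    For what it's worth, one common pattern keeps the enum purely for its named constants and stores the values in a fixed-width 8-bit type from <stdint.h> (which avr-libc provides), so an incoming serial byte needs no translation at all. A sketch, with hypothetical command names:

        #include <stdint.h>

        /* Enumerators are compile-time integer constants; the enum type
         * itself never needs to be stored anywhere. */
        enum {
            CMD_NOP   = 0x00,   /* hypothetical command codes */
            CMD_READ  = 0x01,
            CMD_WRITE = 0x02,
            CMD_RESET = 0xFF
        };

        /* Storage/transport type: exactly 8 bits, matching the wire format. */
        typedef uint8_t command_t;

        void dispatch(command_t cmd)
        {
            switch (cmd) {          /* cmd promotes to int; cases compare fine */
            case CMD_READ:  /* ... */ break;
            case CMD_WRITE: /* ... */ break;
            default:        /* unknown command */ break;
            }
        }

    gcc (including avr-gcc) also offers -fshort-enums, which shrinks the enum type itself to the smallest type that fits its values, but the typedef approach above doesn't depend on compiler flags.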

    Read the article

  • Visual Studio Macros on 64 bit fail with COM error

    - by bruce.kinchin
    I'm doing some JavaScript development and found a cool macro to region my code ("Using #region Directive With JavaScript Files in Visual Studio"). I used this on my 32-bit box, and it worked first time. (Visual Studio 2008 SP1, Win7) For ease of reference, the macro is:

        Option Strict Off
        Option Explicit Off
        Imports System
        Imports EnvDTE
        Imports EnvDTE80
        Imports System.Diagnostics
        Imports System.Collections

        Public Module JsMacros
            Sub OutlineRegions()
                Dim selection As EnvDTE.TextSelection = DTE.ActiveDocument.Selection
                Const REGION_START As String = "//#region"
                Const REGION_END As String = "//#endregion"
                DTE.ExecuteCommand("Edit.StopOutlining")
                selection.SelectAll()
                Dim text As String = selection.Text
                selection.StartOfDocument(True)
                Dim startIndex As Integer
                Dim endIndex As Integer
                Dim lastIndex As Integer = 0
                Dim startRegions As Stack = New Stack()
                Do
                    startIndex = text.IndexOf(REGION_START, lastIndex)
                    endIndex = text.IndexOf(REGION_END, lastIndex)
                    If startIndex = -1 AndAlso endIndex = -1 Then
                        Exit Do
                    End If
                    If startIndex <> -1 AndAlso startIndex < endIndex Then
                        startRegions.Push(startIndex)
                        lastIndex = startIndex + 1
                    Else
                        ' Outline region ...
                        selection.MoveToLineAndOffset(CalcLineNumber(text, CInt(startRegions.Pop())), text.Length)
                        selection.MoveToLineAndOffset(CalcLineNumber(text, endIndex) + 1, 1, True)
                        selection.OutlineSection()
                        lastIndex = endIndex + 1
                    End If
                Loop
                selection.StartOfDocument()
            End Sub

            Private Function CalcLineNumber(ByVal text As String, ByVal index As Integer)
                Dim lineNumber As Integer = 1
                Dim i As Integer = 0
                While i < index
                    If text.Chars(i) = vbCr Then
                        lineNumber += 1
                        i += 1
                    End If
                    i += 1
                End While
                Return lineNumber
            End Function
        End Module

    I then tried to use the same macro on two separate 64-bit machines (Win7 x64), identical other than the 64-bit OS version, and it fails to work. Stepping through it with the Visual Studio Macros IDE, it fails the first time on the DTE.ExecuteCommand("Edit.StopOutlining") line with a COM error (Error HRESULT E_FAIL has been returned from a call to a COM component). If I attempt to run it a second time, I can run it from the Macro Editor with no issue, but not from within Visual Studio with the Macro Explorer 'run macro' command. I have reviewed the following articles without finding anything helpful: Stackoverflow: Visual Studio 2008 macro only works from the Macro IDE, not the Macro Explorer; Recorded macro does not run; Failing on DTE.ExecuteCommand. Am I missing something dumb?

    Read the article

  • FOR BOUNTY: "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add a large bounty to this when Stack Overflow lets me. I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed:

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed () ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed () ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    ...and the PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all. Server information:

        Windows edition: Windows Server Standard Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33 GHz
        Memory: 8.00 GB
        System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags or a better title, comment them or edit the question. :-)

    Read the article

  • "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add 500 of my own rep as a bounty when SO lets me. I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server the following errors are displayed: C:\>wkhtmltopdf http://www.google.com google.pdf Loading pages (1/5) QFontEngine::loadEngine: GetTextMetrics failed () ] 10% QFontEngineWin: GetTextMetrics failed () QFontEngineWin: GetTextMetrics failed () QFontEngine::loadEngine: GetTextMetrics failed () QFontEngineWin: GetTextMetrics failed () QFontEngineWin: GetTextMetrics failed () QFontEngineWin: GetTextMetrics failed () QFontEngine::loadEngine: GetTextMetrics failed () ] 36% QFontEngineWin: GetTextMetrics failed () QFontEngineWin: GetTextMetrics failed () // ...etc.... and the PDF is created and saved... just WITHOUT text. All form-fields, images, borders, tables, divs, spans, ps, etc are rendered accurately...just void of any text at all. Server information: Windows edition: Windows Server Standard Service Pack 2 Processor: Intel Xeon E5410 @ 2.33GHz 2.33 GHz Memory: 8.00 GB System type: 64-bit Operating System Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with...so if you can think of better tags/title comment them or edit the question. :-)

    Read the article

  • Hibernate bit array to entity mapping

    - by teabot
    I am trying to map a normalized Java model to a legacy database schema using Hibernate 3.5. One particular table encodes the foreign keys of a one-to-many relationship as a bit array column. Consider tables 'person' and 'club' that describe people's affiliations to clubs:

        person:                club:
        .----.------.          .----.---------.---------.-----------------.
        | id | name |          | id | name    | members | binary(members) |
        |----+------|          |----+---------+---------+-----------------|
        | 1  | Bob  |          | 10 | Cricket | 0       | 000             |
        | 2  | Joe  |          | 11 | Tennis  | 5       | 101             |
        | 3  | Sue  |          | 12 | Cooking | 7       | 111             |
        '----'------'          | 13 | Golf    | 3       | 100             |
                               '----'---------'---------'-----------------'

    So hopefully it is clear that person.id is used as the bit index in the bit array club.members. In this example the members column tells us that no one is a member of Cricket, Bob/Sue are in Tennis, Bob/Sue/Joe are in Cooking, and Sue is in Golf. In my Java domain I'd like to declare this with entities like so:

        class Person {
            private int id;
            private String name;
            ...
        }

        class Club {
            private Set<Person> members;
            private int id;
            private String name;
            ...
        }

    I am assuming that I must use a UserType implementation, but have been unable to find any examples where the items described by the user type are references to entities - not literal field values - or composites thereof. Additionally, I am aware that I'll have to consider how the person entities are fetched when a club instance is loaded. Can anyone tell me how I can tame this legacy schema with Hibernate?

    Read the article

  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits):

        0001 0001 p000 0100 0p00 0001 00p01 1100 ...
        should become
        0001 0001 0000 1000 0000 0100 0111 00 ...

    Basically, I need to strip off every ninth bit to obtain just the data bits. How can I achieve this? This is related to another question asked here some time back. This is on a 32-bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets. This is what I have tried so far: I created a "boolean" array and added the bits into the array based on the bitset of the octet. I then look at every ninth index of the array and throw it away, then move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
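
    In case it's useful, here is a bit-at-a-time C sketch of the stripping step (table-driven or shift-based approaches would be faster, but this shows the indexing; bits are assumed packed MSB-first as in the example):

        #include <stdint.h>
        #include <stddef.h>

        /* Copy every bit of 'in' except each ninth one (the parity bit that
         * follows 8 data bits) into 'out'. in_bits is the number of valid
         * bits in 'in' (at most 45 here). Returns the data bits written. */
        size_t strip_parity(const uint8_t *in, size_t in_bits, uint8_t *out)
        {
            size_t out_bits = 0;
            for (size_t i = 0; i < in_bits; i++) {
                if (i % 9 == 8)
                    continue;                      /* skip the parity bit */
                int bit = (in[i / 8] >> (7 - i % 8)) & 1;  /* read bit i  */
                if (out_bits % 8 == 0)
                    out[out_bits / 8] = 0;         /* start a fresh byte  */
                out[out_bits / 8] |= (uint8_t)(bit << (7 - out_bits % 8));
                out_bits++;
            }
            return out_bits;
        }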

    Read the article

  • Equivalent Carbon 32-bit call for using in 64-bit application - GetApplicationEventTarget().

    - by Dheeraj
    Hi All, I'm writing a 64-bit Cocoa application. I need to register for global key events, so I wrote this piece of code:

        - (void)awakeFromNib {
            EventHotKeyRef gMyHotKeyRef;
            EventHotKeyID gMyHotKeyID;
            EventTypeSpec eventType;
            eventType.eventClass = kEventClassKeyboard;
            eventType.eventKind = kEventHotKeyPressed;
            InstallApplicationEventHandler(&MyHotKeyHandler, 1, &eventType, NULL, NULL);
            gMyHotKeyID.signature = 'htk1';
            gMyHotKeyID.id = 1;
            RegisterEventHotKey(49, cmdKey+optionKey, gMyHotKeyID, GetApplicationEventTarget(), 0, &gMyHotKeyRef);
        }

    But since GetApplicationEventTarget() is not supported for 64-bit applications, I'm getting errors. If I declare it, then I don't get any errors but the application crashes. Is there any equivalent method for GetApplicationEventTarget() (defined in the Carbon framework) to use in 64-bit applications? Or is there any way to get the global key events using Cocoa calls? Any help is appreciated. Thanks, Dheeraj.

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit longs in VB6 are signed, just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author:

        mov eax, [esp+4]
        mov cl, [esp+8]
        shl eax, cl
        ret 8

    I spit those bytes into a file and tried unassembling them in a Windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set, because it didn't like EAX when I tried assembling something but it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for the instruction ADD EAX, CL or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
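
    For comparison, here is a sketch of a matching ADD routine under the same convention the quoted SHL stub uses (two Longs on the stack, result returned in EAX, callee pops 8 bytes of arguments). The encodings assume ordinary 32-bit x86; treat the byte string as unverified until it has been run through an assembler:

        8B 44 24 04    mov eax, [esp+4]   ; load the first Long
        03 44 24 08    add eax, [esp+8]   ; add the second; the carry bit is simply dropped
        C2 08 00       ret 8              ; return, popping both parameters

    Concatenated, that would give the hex string "8B44240403442408C20800".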

    Read the article

  • Detection of negative integers using bit operations

    - by Nawaz
    One approach to check whether a given integer is negative could be this (using bit operations):

        int num_bits = sizeof(int) * 8; //assuming 8 bits per byte!
        int sign_bit = given_int & (1 << (num_bits-1)); //sign_bit is either 1 or 0
        if ( sign_bit ) {
            cout << "given integer is negative"<<endl;
        } else {
            cout << "given integer is positive"<<endl;
        }

    The problem with this solution is that the number of bits per byte might not be 8; it could be 9, 10, 11, even 16 or 40 bits per byte. A byte doesn't necessarily mean 8 bits! Anyway, this problem can easily be fixed by writing:

        //CHAR_BIT is defined in limits.h
        int num_bits = sizeof(int) * CHAR_BIT; //no assumption.

    It seems fine now. But is it really? Is this Standard conformant? What if the negative integer is not represented as 2's complement? What if its representation in a binary numeration system doesn't require negative integers to have a 1 in the most significant bit? Can we write code that will be both portable and standard conformant? Related topics: Size of Primitive data types; Why is a boolean 1 byte and not 1 bit of size?
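
    For the record, the fully portable test needs no bit inspection at all: the relational operators are defined on values, not representations, so they work identically for two's complement, one's complement and sign-magnitude. A trivial C sketch:

        #include <stdio.h>

        /* Portable sign test: '<' compares values, so the integer's
         * bit-level representation is irrelevant. */
        int is_negative(int x)
        {
            return x < 0;
        }

        int main(void)
        {
            printf("%d\n", is_negative(-5)); /* prints 1 */
            printf("%d\n", is_negative(42)); /* prints 0 */
            return 0;
        }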

    Read the article

  • setBit java method using bit shifting and hexadecimal code - question

    - by somewhat_confused
    I am having trouble understanding what is happening in the two lines with the 0xFF7F and the one below it. There is a link here that explains it to some degree: http://www.herongyang.com/java/Bit-String-Set-Bit-to-Byte-Array.html I don't know if ((0xFF7F>>posBit) & oldByte) & 0x00FF is supposed to be three values 'AND'ed together or how this is supposed to be read. If anyone can clarify what is happening here a little better, I would greatly appreciate it.

        private static void setBit(byte[] data, final int pos, final int val) {
            int posByte = pos/8;
            int posBit = pos%8;
            byte oldByte = data[posByte];
            oldByte = (byte) (((0xFF7F>>posBit) & oldByte) & 0x00FF);
            byte newByte = (byte) ((val<<(8-(posBit+1))) | oldByte);
            data[posByte] = newByte;
        }

    This method was called as setBit(out, i, val) from a selectBits method, where: out is byte[] out = new byte[numOfBytes] (numOfBytes can be 7 in this situation); i is the number [57], the original number from the PC1 int array holding the 56 integers; and val is the bit taken from the byte array by the getBit() method.
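
    Reading the mask expression inside out may help: 0xFF7F is 1111 1111 0111 1111 in binary, so 0xFF7F >> posBit slides the single 0 down to the target bit position (counting bit 0 as the most significant bit of the byte); AND-ing that with oldByte clears just that one bit, and the final & 0x00FF merely trims the result back to the low 8 bits. So yes, it is two successive ANDs over three values. A small C rendering of the same steps (the operators behave the same as in Java):

        #include <stdint.h>
        #include <stdio.h>

        /* Clear bit 'posBit' (0 = most significant) of oldByte, then write
         * 'val' into it -- the same steps as the Java setBit body. */
        uint8_t set_bit(uint8_t oldByte, int posBit, int val)
        {
            /* 0xFF7F >> posBit walks the lone 0 to the target position;
             * its low 8 bits form a clear-this-bit mask. */
            uint8_t mask    = (uint8_t)((0xFF7F >> posBit) & 0xFF);
            uint8_t cleared = oldByte & mask;             /* bit forced to 0 */
            return (uint8_t)(cleared | (val << (7 - posBit)));
        }

        int main(void)
        {
            printf("%02X\n", set_bit(0x00, 0, 1)); /* 80: MSB set     */
            printf("%02X\n", set_bit(0xFF, 7, 0)); /* FE: LSB cleared */
            return 0;
        }

    Note that the Java shift amount (8-(posBit+1)) is the same quantity as 7-posBit above.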

    Read the article

  • How to compare a memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought an LED matrix here with 4 x HT1632C chips and I'm using it on my Arduino Mega2560. There's no code available for this chipset (it's not the same as the HT1632), so I'm writing my own. I have a plot function that takes x,y coordinates and a color and turns that pixel on. On its own this works perfectly. But I need more performance from my display, so I made a shadowRam variable that is a "copy" of my device memory. Before I plot anything on the display, it checks shadowRam to see if it's really necessary to change that pixel. When I enable this (getShadowRam) in the plot function, my display gets some, just SOME (like 3 or 4 on the entire display), ghost pixels (pixels that are not supposed to be turned on). If I just comment out the prev_color ifs in my plot function it works perfectly. Also, I'm clearing my shadowRam array by setting the whole matrix to zero. Variables:

        #define BLACK 0
        #define GREEN 1
        #define RED 2
        #define ORANGE 3
        #define CHIP_MAX 8
        byte shadowRam[63][CHIP_MAX-1] = {0};

    getShadowRam function:

        byte HT1632C::getShadowRam(byte x, byte y) {
          byte addr, bitval, nChip;
          if (x >= 32) {
            nChip = 3 + x/16 + (y>7?2:0);
          } else {
            nChip = 1 + x/16 + (y>7?2:0);
          }
          bitval = 8 >> (y&3);
          x = x % 16;
          y = y % 8;
          addr = (x<<1) + (y>>2);
          if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) {
            return ORANGE;
          } else if (shadowRam[addr][nChip-1] & bitval) {
            return GREEN;
          } else if (shadowRam[addr+32][nChip-1] & bitval) {
            return RED;
          } else {
            return BLACK;
          }
        }

    plot function:

        void HT1632C::plot(int x, int y, int color) {
          if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return;
          if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return;
          char addr, bitval;
          byte nChip;
          byte prev_color = HT1632C::getShadowRam(x,y);
          bitval = 8 >> (y&3);
          if (x >= 32) {
            nChip = 3 + x/16 + (y>7?2:0);
          } else {
            nChip = 1 + x/16 + (y>7?2:0);
          }
          x = x % 16;
          y = y % 8;
          addr = (x<<1) + (y>>2);
          switch (color) {
            case BLACK:
              if (prev_color != BLACK) { // compare with memory to only set if pixel is other color
                // clear the bit in both planes
                shadowRam[addr][nChip-1] &= ~bitval;
                HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                shadowRam[addr+32][nChip-1] &= ~bitval;
                HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
              }
              break;
            case GREEN:
              if (prev_color != GREEN) {
                // set the bit in the green plane and clear the bit in the red plane
                shadowRam[addr][nChip-1] |= bitval;
                HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                shadowRam[addr+32][nChip-1] &= ~bitval;
                HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
              }
              break;
            case RED:
              if (prev_color != RED) {
                // clear the bit in the green plane and set the bit in the red plane
                shadowRam[addr][nChip-1] &= ~bitval;
                HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                shadowRam[addr+32][nChip-1] |= bitval;
                HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
              }
              break;
            case ORANGE:
              if (prev_color != ORANGE) {
                // set the bit in both the green and red planes
                shadowRam[addr][nChip-1] |= bitval;
                HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                shadowRam[addr+32][nChip-1] |= bitval;
                HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
              }
              break;
          }
        }

    If it helps: the datasheet of the board I'm using. Page 7 has the memory mapping I'm using. Also, I have a video of the display working.

    Read the article

  • Browsing mapped network drives in Aptana Studio - Windows 7

    - by Marco
    I've recently started using Windows 7 (64-bit) at work, but after installing Aptana like usual, and mapping my network folders like I always have, Aptana shows the mapped drives but with a red X on the drive icon. Using the native Windows Explorer I can browse the drives fine, and I don't need to log in. If it matters, the mapped drives are hosted on both Windows and Linux servers. Any ideas on what to do? My googling is drawing blanks.

    Read the article

  • All office apps crash immediately on server 2008

    - by Tim
    Hi, I have a brand new Windows 2008 server (64-bit) with a brand new installation of Office 2007, fully patched with all Windows updates etc. Every time I try to run any of the Office apps, it crashes immediately, even in safe mode. The only remotely useful information I get is: Exception Code: c0000005, Exception Data: 00000008. If I run in compatibility mode for Windows XP, everything is fine. Anyone ever seen this before? I've tried turning off DEP, but that made no difference either. Thanks, Tim

    Read the article

  • AIX Checklist for stable obiee deployment

    - by user554629
    Common AIX configuration issues ( last updated 27 Aug 2012 )

    OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details.

    What makes OBIEE different? When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back". "Why is this necessary? Why aren't we seeing issues with other software?" It's a fair question that I have often struggled to answer; here are the talking points:
    - OBIEE is memory intensive. It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network.
    - OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly.
    - OBIEE is aggressively thread efficient; if atomic operations on a particular architecture do not work correctly, the software crashes.
    - OBIEE dynamically loads third-party database client libraries directly into the nqsserver process. If the library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code. These are extremely difficult bugs to find.
    - OBIEE software uses 99% common source across multiple platforms: Windows, Linux, AIX, Solaris and HPUX. If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures.
    It is rare to have a single product require so many diverse technical skills. My role in support is to understand system configurations, performance issues, and crashes. An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices. Here are some guidelines.

    1. AIX C++ Runtime must be at version 11.1.0.4
    $ lslpp -L | grep xlC.aix
    obiee software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions ( 5, 6, 7 ). Download from here: https://www-304.ibm.com/support/docview.wss?uid=swg24031426
    No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version.

    2. AIX 5.3 Technology Level 12 is required when running on Power5/6/7 processors
    AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions. Customers with an unstable deployment, dozens of unexplained crashes, became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this:
    $ oslevel -s
    5300-12-03-1107
    IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example ). AIX 5.3 is still supported and popular running in an LPAR.

    3. obiee userid limits
    $ ulimit -Ha ( hard limits )
    $ ulimit -a ( default limits )
    core file size (blocks)     unlimited
    data seg size (kbytes)      unlimited
    file size (blocks)          unlimited
    max memory size (kbytes)    unlimited
    open files                  10240
    cpu time (seconds)          unlimited
    virtual memory (kbytes)     unlimited
    It is best to establish the values in /etc/security/limits. The root user is needed to observe and modify this file. If you modify a limit, you will need to log in again to change it further. For example:
    $ ulimit -c 0
    $ ulimit -c 2097151
    cannot modify limit: Operation not permitted
    $ ulimit -c unlimited
    $ ulimit -c
    0
    There are only two meaningful values for ulimit -c: zero or unlimited. Anything else is likely to produce a truncated core file that cannot be analyzed.

    4. Deploy 32-bit or 64-bit?
    Early versions of OBIEE offered a 32-bit or 64-bit choice to AIX customers. The 32-bit choice was needed if a database vendor did not supply a 64-bit client library. That's no longer an issue, and beginning with OBIEE 11, 32-bit code is no longer shipped. A common error that leads to "out of memory" conditions is to accept the 32-bit memory configuration choices on 64-bit deployments. The significant configuration choices are:
    - Maximum process data (heap) size, in an AIX environment variable:
      LDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x...
    - Two thread stack sizes, in obiee NQSConfig.INI:
      [ SERVER ]
      SERVER_THREAD_STACK_SIZE = 0;
      DB_GATEWAY_THREAD_STACK_SIZE = 0;
    - Sort memory, in NQSConfig.INI:
      [ GENERAL ]
      SORT_MEMORY_SIZE = 4 MB ;
      SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    Choosing a value for MAXDATA:
    0x080000000   2GB  Default maximum 32-bit heap size ( 8 with 7 zeros )
    0x100000000   4GB  64-bit breaking even with 32-bit ( 1 with 8 zeros )
    0x200000000   8GB  64-bit double the 32-bit max
    0x400000000  16GB  64-bit safety
    Using a 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation: registers are twice as big and consume twice as much memory in the heap. Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit. A 32-bit process is constrained by the 32-bit virtual addressing limits; heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way; extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage. If the storage is not available, such a query might crash the whole server and disrupt existing users. There is no performance penalty on AIX for configuring more memory than required, so extra memory can be configured for safety. If there are no other considerations, start with 8GB.
    Choosing a value for thread stack size: zero is the value documented to select an appropriate default. My preference is to change this to an absolute value, even if you intend to use the documented default; it provides better documentation and removes the "surprise" factor. There are two thread types that can be configured:
    - GATEWAY is used by a thread pool to call a database client library to establish a DB connection. The default size is 256KB; many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used.
    - SERVER threads are used to run queries. OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage. It's difficult to provide guidance on a value that depends on data and complexity. The general notion is to provide more space than you think you need: "double down" and increase the value if you run out; otherwise inspect the query to understand why it is too complex for the thread stack. There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.
    256 KB  The default 32-bit stack size. Many customers increased this to 512KB on 32-bit. A 64-bit server is very likely to crash with this value; the stack contains mostly register values, which are twice as big.
    512 KB  The documented 64-bit default. Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.
    1 MB    The recommended 64-bit setting. If your system only ever uses 512KB of stack space, there is no performance penalty for using a 1MB stack size.
    2 MB    Many large customers use this value for safety. No performance penalty.
    nqscheduler does not use the NQSConfig.INI file to set thread stack size. If this process crashes because the thread stack is too small, use this to set 2MB:
    export OBI_BACKGROUND_STACK_SIZE=2048

    5. Shared libraries are not (shared)
    When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment. If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:
    - The libraries reduce the heap storage available. Might be significant in 32-bit processes; irrelevant in 64-bit processes.
    - Library code is loaded into multiple real pages for execution; one copy for each process.
    Multiple execution images is a significant issue for both 32- and 64-bit processes. The "real memory pages" saved by using public memory segments is a minor concern; today's machines typically have plenty of real memory. The real problem with private copies of libraries is that they consume processor cache blocks, which are limited. The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache, 128KB blocks ) is refreshed from real memory. Performance loss because instructions are delayed is difficult to measure without access to low-level cache fault data; the machine just appears to be running slowly for no observable reason. This is an easy problem to detect, and an easy problem to correct.
    Detection: the "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded. The 32-bit public segment is 13 ( "dxxxxxxx" ); private segments are 2-a. The 64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx" ); the private segment is 8.
    genld -l | grep -v ' d| 9' | sort +2
    provides a list of privately loaded libraries.
    Repair: chmod o+r <libname>
    AIX shared libraries will have a suffix of ".so" or ".a". Another technique is to change all libraries in a selected directory, to repair those that might not be currently loaded. The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java:
    chmod o+r /shr/dir/*.a /shr/dir/*.so

    6. Configure your system for diagnostics
    Production systems shouldn't crash, and yet bad things happen to good software. If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support. Here's what we need to be able to diagnose a core file from your system:
    - fullcore enabled: chdev -l sys0 -a fullcore=true
    - core naming enabled: chcore -n on -d
    - ulimit must not truncate core; see item 3.
    - pstack.sh is used to capture core documentation.
    - obidoc is used to capture the current AIX configuration.
    - snapcore, an AIX utility, captures the core and libraries. Use the proper syntax:
      $ snapcore -r corename executable-fullpath
      /tmp/snapcore will contain the .pax.Z output file. It is compressed.
    - If cores are directed to a common directory, ensure the obiee userid can write to the directory. ( chcore -p /cores -d ; chmod 777 /cores )
    The filesystem must have sufficient space to hold a crashing obiee application. Use df -k and check the "Free" column ( not "% Used" ); 8388608 is 8GB.

    7. Disable Oracle Client Library signal handling
    The Oracle DB Client Library is frequently distributed with the sqlplus development kit. By default, the library enables a signal handler, which will document a call stack if the application crashes. The signal handler is not needed, and is definitely disruptive to obiee diagnostics. It needs to be disabled. sqlnet.ora is typically located at $ORACLE_HOME/network/admin/sqlnet.ora. Add this line at the top of the file:
    DIAG_SIGHANDLER_ENABLED=FALSE

    8. Disable async query in the RPD connection pool
    This might be an obiee 10.1.3.4 issue only ( still checking ). "async query" must be disabled in the connection pools. It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes. Please ensure it is turned off in the RPD.

    9. Check the AIX error report (errpt)
    Errors external to obiee applications can trigger crashes:
    $ /bin/errpt -a
    Hardware errors ( firmware, adapters, disks ) should be reported to IBM support. All application core files are recorded by AIX; the most recent ones are listed first.

    Read the article

  • Package managers for Windows

    - by mezei.zoltan
    You might be familiar with Ninite. What I'd like to know is if there are good alternatives to that software for Windows. The features I expect:
    - installs the latest version of software
    - supports 64-bit installs where possible
    - strips ads/toolbars/similar stuff
    - provides a way to keep the programs updated after installation
    If I can add custom installers to the software, that's a big plus. Any ideas if such a program exists?

    Read the article

  • Where's my Open-With gVim context menu option in Windows 7?

    - by David Mackintosh
    I have gVim installed. Under Vista and XP, this offered me either an addition to the object context menu, "Edit with gVim", or an addition to the "Open With" context menu, "gVim". This would let me send arbitrary files to gVim for editing. Under Windows 7 64-bit, I have installed gVim -- twice, as it happens -- and there's no menu item. How do I add an option to send arbitrary files to gVim for viewing/editing?

    Read the article
