Search Results

Search found 19878 results on 796 pages for 'bit pirate'.


  • PHP: Replace umlauts with closest 7-bit ASCII equivalent in a UTF-8 string

    - by BlaM
    What I want to do is remove all accents and umlauts from a string, turning "lärm" into "larm" or "andré" into "andre". What I tried was to utf8_decode the string and then use strtr on it, but since my source file is saved as a UTF-8 file, I can't enter the ISO-8859-15 characters for all the umlauts - the editor inserts the UTF-8 characters. Obviously one solution would be an include file saved as ISO-8859-15, but there must be a better way than adding another required include?

        echo strtr(utf8_decode($input),
            'ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ',
            'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy');

    UPDATE: Maybe I was a bit inaccurate about what I'm trying to do: I don't actually want to remove the umlauts, but to replace them with their closest one-character ASCII equivalent.


  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest data type possible (it was a byte in my case), while C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
          V3d_AMBIENT,
          V3d_DIRECTIONAL,
          V3d_POSITIONAL,
          V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ integer grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?
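    For reference, on the usual 64-bit data models int stays 4 bytes: LP64 (Linux, macOS) widens long and pointers, while LLP64 (Windows) widens only pointers, so a C++ enum backed by int keeps its 32-bit size. A minimal C++11 sketch of how the C++ side could pin the layout explicitly (the record members here are illustrative, not the actual DLL's types):

        #include <cstdint>

        // Force a 4-byte enum to match the Delphi {$Z4} setting (C++11 fixed underlying type).
        enum class TypeOfLight : std::int32_t {
            Ambient, Directional, Positional, Spot
        };

        #pragma pack(push, 1)              // counterpart of a Delphi packed record
        struct LightRecord {               // hypothetical record layout for illustration
            TypeOfLight kind;              // stays 4 bytes on 32- and 64-bit targets
            std::int32_t intensity;        // illustrative payload field
        };
        #pragma pack(pop)

        static_assert(sizeof(TypeOfLight) == 4, "enum must stay 32 bits");
        static_assert(sizeof(LightRecord) == 8, "packed layout must not change on 64-bit");

    Compiling this for both targets and letting the static_asserts fail loudly is an easy way to catch any drift in alignment or type sizes.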


  • Cheap Windows driver signing for 64-bit Windows 7

    - by kayahr
    I need to install the libusb-win32 driver on Windows 7 64-bit machines. This driver is open source, so it is not digitally signed; I want to sign it myself, but I wonder if this can be done WITHOUT paying a lot of money. Is it possible to use a certificate which is NOT signed by Verisign or GlobalSign? Maybe self-signed, or by using StartSSL instead? And if yes, how do I do it? According to this howto I have to use a "cross-certificate" (and there are only six available on the Microsoft list, and most of them are for CAs which are no longer active). I don't care if the user is confronted with a warning message. I can even accept the user having to install a special CA certificate first. I only require that the driver runs without manually disabling the signature check on each Windows startup.


  • Help porting a bit of Prototype JavaScript to jQuery

    - by ewall
    I have already implemented some AJAX pagination in my Rails app by using the example code for the will_paginate plugin, which is apparently using Prototype. But if I want to switch to jQuery for future additions, I really don't want to have the Prototype stuff sitting around too (yes, I know it's possible to run both). I haven't written a lick of JavaScript in years, let alone looked into Prototype and jQuery, so I could use some help converting this bit into jQuery-compatible syntax:

        document.observe("dom:loaded", function() {
          // the element in which we will observe all clicks and capture
          // ones originating from pagination links
          var container = $(document.body)
          if (container) {
            var img = new Image
            img.src = '/images/spinner.gif'
            function createSpinner() {
              return new Element('img', { src: img.src, 'class': 'spinner' })
            }
            container.observe('click', function(e) {
              var el = e.element()
              if (el.match('.pagination a')) {
                el.up('.pagination').insert(createSpinner())
                new Ajax.Request(el.href, { method: 'get' })
                e.stop()
              }
            })
          }
        })

    Thanks in advance!


  • Starting with Liferay and a bit overwhelmed with how to begin

    - by hairbymaurice
    Hello, I would like to start developing a Liferay theme and am a little bit lost! I am a Mac user and I have installed Liferay and also Xcode, but I am not clear how to begin. I have downloaded the SDK for Liferay but I do not understand how to install it, or use it for that matter. So, questions: Is Xcode an appropriate development environment to work with, or is something else a little easier to get on with? Does Xcode build in the same way Ant does? How do I install the SDK? Do I just drop it into Tomcat and away I go? Yes, I am very new to all this! I am not actually sure if I am asking the right questions.


  • How many color combinations are in a 24-bit image?

    - by numerical25
    I am reading a book and I am not sure if it's a mistake or I am misunderstanding the quote. It reads: "Nowadays every PC you can buy has hardware that can render images with at least 16.7 million individual colors. Rather than have an array with thousands of color entries, the images instead contain explicit color values for each pixel. A 24-bit display, of course, uses 24 bits, or 3 bytes per pixel, for color information. This gives 1 byte, or 256 distinct values each, for red, green, and blue. This is generally called true color, because 256^3 (16.7 million)..." He says 1 byte is equal to 256 distinct values. 1 byte = 8 bits. 8^2 bits = 64 distinct colors, right?? It's not adding up for me. I know it might be something simple to understand, but I don't understand.
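    The count per channel is 2^8, not 8^2: each of the 8 bits doubles the number of patterns a byte can hold, giving 2x2x2x2x2x2x2x2 = 256 values, and three independent channels multiply to 256^3 = 16,777,216 colors. A tiny C++ check of the arithmetic:

        #include <cstdio>

        int main() {
            // A byte has 8 bits; each bit doubles the number of representable
            // patterns, so a byte holds 2^8 = 256 distinct values (not 8^2 = 64).
            int perChannel = 1 << 8;                        // 2 multiplied by itself 8 times
            long long total = 1LL * perChannel * perChannel * perChannel;
            printf("%d values per channel\n", perChannel);  // 256
            printf("%lld total colors\n", total);           // 16777216, i.e. ~16.7 million
            return 0;
        }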



  • Building a 16-bit OS - character array not working

    - by brainbarshan
    Hi. I am building a 16-bit operating system, but character arrays do not seem to work. Here is my example kernel code:

        asm(".code16gcc\n");

        void putchar(char);

        int main()
        {
            char *str = "hello";
            putchar('A');
            if (str[0] == 'h')
                putchar('h');
            return 0;
        }

        void putchar(char val)
        {
            asm("movb %0, %%al\n"
                "movb $0x0E, %%ah\n"
                "int $0x10\n"
                :
                : "m"(val));
        }

    It prints "A", so the putchar function is working properly, but if(str[0] == 'h') putchar('h'); is not working. I am compiling it with:

        gcc -fno-toplevel-reorder -nostdinc -fno-builtin -I./include -c -o ./bin/kernel.o ./source/kernel.c
        ld -Ttext=0x9000 -o ./bin/kernel.bin ./bin/kernel.o -e 0x0

    What should I do?


  • Scan for first zero bit (Assembly)?

    - by cthulhu
    I have values in the AH, AL, BL, and BH registers. I need to check whether there is a 0 bit in the left (upper) half of each of the registers. If yes, then put the value 10 into the check variable, else -10. How can I do this? I tried something like this:

        org 100h
        check dw 0
            mov ah, 11111111b
            mov al, 11111111b
            mov bl, 11111111b
            mov bh, 11111111b
            mov check, -10
            shr ah, 4
            shr al, 4
            shr bl, 4
            shr bh, 4
            cmp ah, 0Fh
            jz first
        first:
            cmp al, 0Fh
            jz second
        second:
            cmp bl, 0Fh
            jz third
        third:
            cmp bh, 0Fh
            jz final
        final:
            mov check, 10
            ret
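    As a point of reference, the upper nibble of an 8-bit register is all ones exactly when ANDing it with 0F0h gives 0F0h (note that in the attempt above each jz jumps to the very next label, so execution falls through to final and check ends up as 10 regardless). A small C++ sketch of the intended logic, with hypothetical values standing in for the registers:

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Stand-ins for AH, AL, BL, BH; all ones here, so no upper-half zero bits.
            uint8_t regs[4] = { 0xFF, 0xFF, 0xFF, 0xFF };

            int check = 10;                   // assume every register has a 0 bit up top
            for (uint8_t r : regs)
                if ((r & 0xF0) == 0xF0)       // upper four bits are all ones -> no 0 bit there
                    check = -10;

            printf("check = %d\n", check);    // prints -10 for the values above
            return 0;
        }

    In assembly the same test would mask each register with 0F0h, compare against 0F0h, and take a conditional jump, without falling through the labels.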


  • Django ORM dealing with MySQL BIT(1) field

    - by Carles Barrobés
    In a Django application, I'm trying to access an existing MySQL database created with Hibernate (a Java ORM). I reverse engineered the model using:

        $ manage.py inspectdb > models.py

    This created a nice models file from the database and many things were quite fine. But I can't find how to properly access boolean fields, which were mapped by Hibernate as columns of type BIT(1). The inspectdb script by default creates these fields in the model as TextField and adds a comment saying that it couldn't reliably obtain the field type. I changed these to BooleanField but it doesn't work (the model objects always fetch a value of true for these fields). Using IntegerField doesn't work either (e.g. in the admin these fields show strange non-ASCII characters). Any hints on doing this without changing the database? (I need the existing Hibernate mappings and Java application to keep working with the database.)


  • C++ file bad bit

    - by user230911
    When I run this code, the open, seekg, and tellg operations all succeed, but when I read, it fails: the eof, bad, and fail bits are 0, 1, and 1. What can cause a file stream to go bad? Thanks.

        int readriblock(int blockid, char* buffer)
        {
            ifstream rifile("./ri/reverseindex.bin", ios::in | ios::binary);
            rifile.seekg(blockid * RI_BLOCK_SIZE, ios::beg);
            if (!rifile.good()) {
                cout << "block not exsit" << endl;
                return -1;
            }
            cout << rifile.tellg() << endl;
            rifile.read(buffer, RI_BLOCK_SIZE);
            cout << rifile.eof() << rifile.bad() << rifile.fail() << endl;
            if (!rifile.good()) {
                cout << "error reading block " << blockid << endl;
                return -1;
            }
            rifile.close();
            return 0;
        }
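    For what it's worth, a read that merely runs past the end of the file normally raises eofbit and failbit rather than badbit; badbit usually points at a lower-level problem (for example an invalid destination buffer or a stream that was never usable). One way to narrow it down is a checked variant like the sketch below; RI_BLOCK_SIZE is an assumed value here, since the constant isn't shown in the question:

        #include <fstream>
        #include <iostream>

        constexpr std::streamoff RI_BLOCK_SIZE = 4096;   // assumed block size for illustration

        int readRiBlockChecked(int blockid, char* buffer) {
            std::ifstream rifile("./ri/reverseindex.bin", std::ios::in | std::ios::binary);
            if (!rifile.is_open()) {
                std::cout << "could not open file\n";
                return -1;
            }

            rifile.seekg(0, std::ios::end);
            std::streamoff fileSize = rifile.tellg();
            std::streamoff offset = static_cast<std::streamoff>(blockid) * RI_BLOCK_SIZE;
            if (offset + RI_BLOCK_SIZE > fileSize) {
                std::cout << "block " << blockid << " extends past the end of the file\n";
                return -1;
            }

            rifile.seekg(offset, std::ios::beg);
            rifile.read(buffer, RI_BLOCK_SIZE);
            std::cout << "read " << rifile.gcount() << " of " << RI_BLOCK_SIZE << " bytes\n";
            return rifile.good() ? 0 : -1;
        }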


  • Signed and unsigned, and how bit extension works in C

    - by hatorade
        unsigned short s;
        s = 0xffff;
        int i = s;

    How does the extension work here? Two higher-order bytes are added, but I'm confused whether 1s or 0s are extended there. This is probably platform dependent, so let's focus on what Unix does. Would the two higher-order bytes of the int be filled with 1s or 0s, and why? Basically, does the computer know that s is unsigned, and correctly assign 0s to the higher-order bits of the int, so i is now 0x0000ffff? Or, since ints are signed by default on Unix, does it take the sign bit from s (a 1) and copy that to the higher-order bytes?
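    Since the conversion is defined in terms of values rather than bit patterns, an unsigned short holding 0xffff has the value 65535, and that value fits in a 32-bit int, so the upper bytes become zeros; sign extension only happens when converting from a signed type. A small C++ demonstration (assuming the usual Unix sizes of 16-bit short and 32-bit int):

        #include <cstdio>

        int main() {
            unsigned short s = 0xffff;    // 16 bits, all ones; value 65535
            int i = s;                    // value-preserving conversion: upper bytes are zeros
            printf("%d 0x%08x\n", i, static_cast<unsigned>(i));  // prints: 65535 0x0000ffff

            short t = -1;                 // signed 16-bit with the same bit pattern 0xffff
            int j = t;                    // sign extension: upper bytes are ones
            printf("%d 0x%08x\n", j, static_cast<unsigned>(j));  // prints: -1 0xffffffff
            return 0;
        }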


  • Unused memory using 32-bit integers in C

    - by endmade
    I have the following struct of integers (32-bit environment):

        struct rgb {
            int r;
            int g;
            int b;
        };

    Am I correct in saying that, since RGB component values (0-255) only require 8 bits (1 byte) to be represented, I am only using 1 byte of memory and leaving 3 bytes unused for each component? Also, if I instead did the following:

        struct rgb {
            unsigned int r:8;
            unsigned int g:8;
            unsigned int b:8;
        };

    Assuming that what I said above is correct, would using this new struct reduce the number of unused bytes to 1?
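    That reading is right on a typical 32-bit target: three plain ints occupy 12 bytes while only the low byte of each carries data, and three 8-bit bit fields usually pack into a single 4-byte unsigned int, leaving one unused byte (three plain unsigned char members would drop even that). A sketch that just compares the sizes (the exact values are implementation-defined):

        #include <cstdio>

        struct rgb_int   { int r, g, b; };                      // typically 12 bytes
        struct rgb_bits  { unsigned int r : 8, g : 8, b : 8; }; // typically 4 bytes
        struct rgb_bytes { unsigned char r, g, b; };            // typically 3 bytes

        int main() {
            printf("plain ints: %zu bytes\n", sizeof(struct rgb_int));
            printf("bit fields: %zu bytes\n", sizeof(struct rgb_bits));
            printf("bytes:      %zu bytes\n", sizeof(struct rgb_bytes));
            return 0;
        }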


  • WPF: Menu items and combo boxes don't render in Windows 7 64-bit

    - by lilserf
    I'm trying to use an existing internal WPF application (I do have access to the source), but it was developed on XP and I'm using Windows 7 64-bit. When I click (for instance) the File menu, 90% of the time I see no drop-down menu at all. The menu still exists - I can use the arrow keys to navigate up and down and choose an option if I happen to know the order of the options, but nothing renders at all. The other 10% of the time, the menu or some portion of it DOES render, but as I move the cursor up and down I get graphical corruption or disappearing options until I end up back at the "no menu is visible at all" state. This is also true of combo boxes within the application - they show no data when I drop them down, but I can arrow down and choose an entry. Microsoft has some advice about WPF rendering issues here, but none of those steps has helped with my issue.


  • Reverse Data With BIT TYPE for MS SQL

    - by Milacay
    I have a column of type BIT (1/0). Some records are set to 1 and some are set to 0, and those record flags need to be reversed: I want all records with 1 set to 0, and all records with 0 set to 1. If I run "Update Table1 Set Flag = 1 Where Flag = 0" first, then I am afraid all record flags will be 1, and I will not be able to know which ones had Flag = 0. Any suggestions? Thanks!
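    The usual way around the ordering problem is to flip every row in a single statement instead of two sequential updates - in T-SQL that is typically one UPDATE with an expression such as 1 - Flag, which maps 0 to 1 and 1 to 0 at the same time. The same idea in a small C++ sketch over an in-memory array of flags:

        #include <cstdio>

        int main() {
            int flags[] = { 1, 0, 0, 1, 1, 0 };   // stand-in for the BIT column

            // A single expression flips both states at once, so no information
            // about the original value is lost, unlike two separate passes.
            for (int& f : flags)
                f = 1 - f;                        // equivalently: f ^= 1;

            for (int f : flags)
                printf("%d ", f);                 // prints: 0 1 1 0 0 1
            printf("\n");
            return 0;
        }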


  • Erlang bit syntax variable issue

    - by Jimmy Ruska
    Is there any way to format this so it's a valid expression, without adding another step?

        <<One:8,_:(One*8)>> = <<1,9>>.
        * 1: illegal bit size

    These work:

        <<One:8,_:(1*8)>> = <<1,9>>.
        <<1,9>>
        <<Eight:8,_:Eight>> = <<8,9>>.
        <<8,9>>

    I'm trying to parse a binary with nested data with list comprehensions instead of stacking accumulators.


  • Android: simulating a 1-bit display

    - by user1681805
    I'm new to Android, trying to build a simple game which uses a 1-bit black and white display. The screen dimension is 160 * 80, that is 12800 pixels. I created a byte array for the "VRAM", so each time it draws, it first checks the array. The thing is that I am not drawing a point or rectangle for each pixel; I'm using 2 bitmaps (ARGB_4444 - I have to use the alpha channel because of a shadow effect), 1 for positive and 1 for negative. So I call drawBitmap() 12800 times in the SurfaceView's draw method. I know that's silly... But even with OpenGL, 12800 quads won't be that fast, right? Sorry, I cannot post images; here is a link to a screenshot: http://i1014.photobucket.com/albums/af267/baininja/Screenshot_2013-10-22-01-05-36_zps91dbcdef.png Should I totally give up on this and draw points on a 160*80 bitmap, then scale it to the intended size? But that loses the visual effects.


  • Structure within union and bit field

    - by java
        #include <stdio.h>

        union u {
            struct st {
                int i : 4;
                int j : 4;
                int k : 4;
                int l;
            } st;
            int i;
        } u;

        int main()
        {
            u.i = 100;
            printf("%d, %d, %d", u.i, u.st.i, u.st.l);
        }

    I'm trying to figure out the output of this program. The first value printed is u.i = 100, but I can't understand the output for u.st.i and u.st.l. Please also explain bit fields.
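    Sketching what typically happens on a little-endian x86 build with GCC or Clang (bit-field layout is implementation-defined, so other ABIs may differ): 100 is binary 0110 0100, the first 4-bit field i takes the lowest four bits (0100 = 4), j takes the next four (0110 = 6), and l is a separate int placed after the bit-field storage unit, so it does not overlap u.i at all and keeps its zero initialization as a global. An annotated version:

        #include <cstdio>

        union U {
            struct St {
                int i : 4;   // lowest 4 bits of the first int on typical little-endian ABIs
                int j : 4;   // next 4 bits
                int k : 4;   // next 4 bits
                int l;       // a full int after the bit-field unit; not shared with U::i
            } st;
            int i;           // overlaps only the struct's first int
        } u;                 // global, so zero-initialized before main runs

        int main() {
            u.i = 100;       // 100 == 0x64 == binary 0110 0100
            // Typical x86/x86-64 output: 100, 4, 6, 0
            printf("%d, %d, %d, %d\n", u.i, u.st.i, u.st.j, u.st.l);
            return 0;
        }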


  • addSubview / insertSubview:aboveSubview: - a bit confused as to why it does not work

    - by John Smith
    I'm a bit confused as to why the insertSubview:aboveSubview: call does not work. I'm adding some subviews to my navigationController and all of that works well:

        -(UITableView *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            ...
            ...
            [self.navigationController.view addSubview:ImageView];
            [self.navigationController.view addSubview:toolbar];

    At some point in my app I wish to add another toolbar or image above my toolbar, so let's say I'm doing something like this:

        -(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            ...
            ...
            [self.navigationController.view insertSubview:NewImageView aboveSubview:toolbar]; //crucial of course
            [edit]
            rvController = [RootViewController alloc] initWithNibName:@"RootViewController" bundle:[NSBundle] mainBundle];
            rvController.CurrentLevel += 1;
            rvController.CurrentTitle = [dictionary objectForKey:@"Title"];
            [self.navigationController pushViewController:rvController animated:YES];
            rvController.tableDataSource = Children;
            [rvController release];

    However this doesn't work. Does anyone know what I'm doing wrong here? Should I have used something else instead of addSubview, or is the problem somewhere else?


  • Windows 7 64 Bit - ODBC32 - Legacy App Problem

    - by Arturo Caballero
    Good day StackOverflowers, I'm a little stuck (really stuck) with an issue with a legacy application in my organization. I have a Windows 7 Enterprise 64-bit machine, Access 2000 installed, and the legacy app (built with something like VB, but older). The app uses a System ODBC DSN in order to connect to a SQL 2000 database on a remote server. I created the ODBC entry using the C:\Windows\SysWOW64\odbcad32.exe app in order to create a System DSN. I did not use the default Windows 7 one because it is not visible to the legacy app. I tested the ODBC connection with Access and it worked OK; I can access the remote database. Then I run the legacy app as Administrator and the app can see the ODBC DSN, but I'm getting errors on credential validation, namely this error:

        DIAG [08001] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]SQL Server does not exist or access denied. (17)
        DIAG [01000] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]ConnectionOpen (Connect()). (53)
        DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0)

    I use Trusted Connection on the ODBC DSN in order to validate the user against the domain controller. I think that the credentials are not being sent by the legacy app to the ODBC driver, or something like that. I don't have the source code of the legacy app in order to debug the connection. Also, I turned off the firewall. Any ideas?? Thanks in advance!


  • C# TCP Async EndReceive() throws InvalidOperationException ONLY on Windows XP 32-bit

    - by James Farmer
    I have a simple C# async client using a .NET socket that waits for timed messages from a local Java server used for automating commands. The messages come in asynchronously and are written to a ring buffer. This implementation seems to work fine on Windows Vista/7/8 and OS X, but will randomly throw this exception while it's receiving a message from the local Java server:

        Unhandled Exception: System.InvalidOperationException: EndReceive can only be called once for each asynchronous operation.
            at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode)
            at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
            at SocketTest.Controller.RecvAsyncCallback(IAsyncResult ar)
            at System.Net.LazyAsyncResult.Complete(IntPtr userToken)
            ...

    I've looked online for this error, but have found nothing really helpful. This is the code where it seems to break:

        /// <summary>
        /// Callback to receive socket data
        /// </summary>
        /// <param name="ar">AsyncResult to pass to End</param>
        private void RecvAsyncCallback(IAsyncResult ar)
        {
            // The exception will randomly happen on this call
            int bytes = _socket.EndReceive(_recvAsyncResult);

            // check for connection closed
            if (bytes == 0)
            {
                return;
            }

            _ringBuffer.Write(_buffer, 0, bytes);

            // Checks buffer
            CheckBuffer();

            _recvAsyncResult = _sock.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null);
        }

    The error doesn't happen at any particular moment except in the middle of receiving a message. The message itself can be any length for this to happen, and the exception can happen right away, or sometimes even after a minute of perfect communication. I'm pretty new to sockets and network communication, and I feel I might be missing something here. I've tested on at least 8 different computers, and the only similarity between the computers that throw this exception is that their OS is Windows XP 32-bit.


  • 32-bit depth JPG images problem in IE when referenced locally

    - by Stefan
    We have a web application that takes an image which will be uploaded and resized. The resize library we used saved all pictures with 32-bit depth, whatever the depth was before. We have an online client that can view the pictures via an HTML file and all is fine there: all pictures are shown correctly. The problem: we also have a VB WinForms application that downloads the pictures and shows them in an HTML file locally in a WebBrowser control. But here all pictures are rejected (not rendered), just the red cross. If we create a static HTML file with img tags in it locally, it's the same: all pictures that have 32-bit depth are shown as red crosses. If we resave the pictures with 24-bit depth it magically works again. So of course that was our "workaround": let the resize library save all pictures with 24-bit depth instead. Summary: 32-bit JPGs show correctly in IE when online, but not when referenced locally in a local HTML file. (This is true for IE8 on both WinXP and Windows 7.) The same local HTML file opened in Mozilla showed OK. Question: I have googled this a lot but have not found anything about this "problem". Is this a bug in IE8?


  • One position right barrel shift using ALU Operators?

    - by Tomek
    I was wondering if there is an efficient way to perform a right shift on an 8-bit binary value using only ALU operators (NOT, OR, AND, XOR, ADD, SUB). Example: input: 00110101, output: 10011010. I have been able to implement a left shift by just adding the 8-bit binary value to itself, since a left shift is equivalent to multiplying by 2. However, I can't think of a way to do this for a right shift. The only method I have come up with so far is to just perform 7 left barrel shifts. Is this the only way?
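    For what it's worth, the example given (00110101 -> 10011010) is a rotate right by one, and the "seven left barrel shifts" idea does work with only the listed operations: an ADD doubles the value, an AND with 0xFF keeps it to 8 bits, and the bit that spilled out is ORed back in at the bottom. A C++ sketch of that approach (the conditional used to recover the carried-out bit is ordinary control flow, not an extra ALU operation):

        #include <cstdio>

        // One left rotate of an 8-bit value using only ADD, AND and OR.
        unsigned rotl1(unsigned x) {
            unsigned doubled = x + x;            // ADD: shifts left by one; old bit 7 lands in bit 8
            unsigned carry   = doubled & 0x100;  // AND: isolate the bit that fell out
            unsigned result  = doubled & 0xFF;   // AND: keep only the low 8 bits
            if (carry)                           // re-inject the old MSB at bit 0
                result = result | 0x01;          // OR
            return result;
        }

        // A right rotate by one position is seven left rotates on an 8-bit value.
        unsigned rotr1(unsigned x) {
            for (int i = 0; i < 7; ++i)
                x = rotl1(x);
            return x;
        }

        int main() {
            unsigned in = 0x35;                          // 00110101
            printf("%02X -> %02X\n", in, rotr1(in));     // prints: 35 -> 9A (10011010)
            return 0;
        }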


  • Trimming a bit of the beginning off a recorded waveform

    - by Lowgain
    I've got a Flash 10.1 app that lets me record microphone input to a WAV without a media server, which I am saving to an Amazon S3 bucket. I have another process running on a server which gets WAVs from this bucket, converts them to MP3 using LAME, and puts them into another bucket. This all works fine, but in converting WAV to MP3, about 0.1 sec or so of silence is added to my sound. In the application these are being used in, perfect sync is critical, so I need to trim off that little bit. If I have to trim it off the original waveform, that is okay; I don't expect anything important to happen in that first fraction of a second. What is the best way to go about this? I am using Adobe's WavWriter to convert my ByteArray into a proper waveform. Is there a way I can easily trim off the first few samples from my ByteArray without invalidating the structure? Alternatively, is there a good server-side tool I can use to trim the WAV before running it through LAME, or an argument I can give LAME? Or could I even trim that sound off the MP3 after it has been converted? Thanks!
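    Part of that gap is usually inherent MP3 encoder delay rather than anything in the source audio, so if the playback side cannot compensate, trimming the WAV before encoding (as suggested above) is a reasonable approach. A server-side sketch in C++ that drops the first N sample frames, assuming the canonical 44-byte PCM header a simple recorder writes (trimWavStart and its parameters are illustrative; a robust tool would walk the RIFF chunk list instead):

        #include <cstdint>
        #include <fstream>
        #include <iterator>
        #include <string>
        #include <vector>

        bool trimWavStart(const std::string& inPath, const std::string& outPath,
                          uint32_t framesToSkip) {
            std::ifstream in(inPath, std::ios::binary);
            if (!in) return false;

            std::vector<char> header(44);                      // canonical RIFF/fmt/data header
            if (!in.read(header.data(), 44)) return false;

            // Bytes 32-33 of the canonical header hold the block align (bytes per sample frame).
            uint16_t blockAlign = uint8_t(header[32]) | (uint16_t(uint8_t(header[33])) << 8);
            uint32_t skipBytes  = framesToSkip * blockAlign;

            std::vector<char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
            if (skipBytes > data.size()) return false;

            uint32_t newDataSize = uint32_t(data.size()) - skipBytes;
            auto write32 = [&](int off, uint32_t v) {          // little-endian field update
                for (int b = 0; b < 4; ++b) header[off + b] = char((v >> (8 * b)) & 0xFF);
            };
            write32(4, 36 + newDataSize);                      // RIFF chunk size
            write32(40, newDataSize);                          // data chunk size

            std::ofstream out(outPath, std::ios::binary);
            out.write(header.data(), 44);
            out.write(data.data() + skipBytes, newDataSize);
            return static_cast<bool>(out);
        }

    For a 0.1-second trim, framesToSkip would be the sample rate divided by ten.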


  • java.util.BitSet -- set() doesn't work as expected

    - by dwhsix
    Am I missing something painfully obvious? Or does just nobody in the world actually use java.util.BitSet? The following test fails:

        @Test
        public void testBitSet() throws Exception {
            BitSet b = new BitSet();
            b.set(0, true);
            b.set(1, false);
            assertEquals(2, b.length());
        }

    It's really unclear to me why I don't end up with a BitSet of length 2 and the value 10. I peeked at the source for java.util.BitSet, and on casual inspection it seems to fail to make sufficient distinction between a bit that's been set false and a bit that has never been set to any value... (Note that explicitly setting the size of the BitSet in the constructor has no effect either, e.g. BitSet b = new BitSet(2);)

