Search Results

Search found 11264 results on 451 pages for 'utf 32'.

Page 143/451

  • .Net Memory limit

    - by Prashant
    I have a .NET application running on a 32-bit box. The application is a Windows service. It consistently hovers around the 600-800 MB range. Is this a problem? If an application crosses 1 GB, is that a memory issue?

    Read the article

  • CRC for PNG file format

    - by darkie15
    Hi all, I need to read a PNG file, interpret all the information stored in it, and print it in human-readable format. While working on PNG, I learned that it uses CRC-32 to generate a checksum for each chunk. But I could not understand the following information from the PNG specification. The polynomial used by PNG is: x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1. Here is the link for reference: http://www.w3.org/TR/PNG/ Can anyone please help me in understanding this? Regards, darkie
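
    The exponents in that polynomial are simply the set bits of the 33-bit divisor: dropping the leading x^32 term gives 0x04C11DB7, and bit-reversing that gives 0xEDB88320, the constant used by the LSB-first algorithm that PNG (like zlib) specifies, with an initial value and final XOR of 0xFFFFFFFF. A minimal bit-by-bit sketch in C (not taken from the article):

        #include <stddef.h>
        #include <stdint.h>

        /* CRC-32 as used by PNG chunks: reflected polynomial 0xEDB88320,
         * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
        uint32_t png_crc32(const uint8_t *buf, size_t len)
        {
            uint32_t crc = 0xFFFFFFFFu;
            for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++)   /* long division, one bit at a time */
                    crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
            }
            return crc ^ 0xFFFFFFFFu;
        }

    As a sanity check, this yields 0xCBF43926 for the ASCII string "123456789"; in a PNG file the CRC covers the chunk type and chunk data fields.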

    Read the article

  • F#'s fshtmldoc.exe using Mono/OS X

    - by John Clements
    This is doubtless something obvious, but downloading the F# PowerPack from CodePlex and running fshtmldoc produces this error:

        clements$ mono ./fshtmldoc.exe FSharp.PowerPack.dll
        Processing 'FSharp.PowerPack.dll'...
        Unexpected failure while writing HTML docs: An exception was thrown by the type initializer for Microsoft.FSharp.Metadata.AssemblyLoader

    This is using Mono 2.6.3, F# 2.0 1.9.9.9, and OS X 10.6.3 on a 32-bit Intel processor. Any help would be appreciated. Many thanks, John Clements (repost from the PowerPack online discussion group -- no response there)

    Read the article

  • smallest filesize for transparent gif

    - by zaf
    I'm looking for the smallest (in terms of filesize) transparent 1 pixel image. Currently I have a gif of 49 bytes which seems to be the most popular. But I remember many years ago having one which was less than 40 bytes. Could have been 32 bytes. Can anyone do better? Graphics format is no concern as long as modern web browsers can display it and respect the transparency.

    Read the article

  • Windows batch files: .bat vs .cmd?

    - by Chris Noe
    As I understand it, .bat is the old 16-bit naming convention, and .cmd is for 32-bit Windows, i.e., starting with NT. But I continue to see .bat files everywhere, and they seem to work exactly the same with either suffix. Assuming that my code will never need to run on anything older than NT, does it really matter which way I name my batch files, or is there some gotcha awaiting me by using the wrong suffix?

    Read the article

  • How to open DataSet in Visual Studio 2008 faster?

    - by Ekkapop
    When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue with my job. While I'm waiting I can't do anything in Visual Studio; moreover, CPU and memory usage grow dramatically. Is there any way to reduce this waiting time?

    Hardware - Desktop
        CPU: Intel Q6600
        Memory: 4 GB
        HDD: 320 GB 7200 rpm
        OS: Windows XP 32-bit with Service Pack 3

    Read the article

  • wpf window ghosting problem when resizing over two screens

    - by bluebit
    The title pretty much describes it. If I resize my WPF app so that it stretches over two monitors in a dual-monitor setup, and then resize it back, there will be a ghost window on the second monitor that does nothing, but is still moved when I move the original window on the first screen. Has anyone had issues like this? I think it's a refresh bug on some OSes (I use Windows XP 32-bit), but would like to confirm with the community.

    Read the article

  • Listing available COM Objects with Powershell

    - by outtacontrol
    I am currently using the following script to list the available COM objects on my machine:

        $path = "REGISTRY::HKEY_CLASSES_ROOT\CLSID\*\PROGID"
        foreach ($obj in dir $path) {
            write-host $obj.GetValue("")
        }

    I read on another website that the existence of the InProcServer32 key is evidence that the object is 64-bit compatible. So, using PowerShell, how can I determine the existence of InProcServer32 for each COM object - if that is even the correct way of establishing whether it is 32-bit or 64-bit?

    Read the article

  • what does macosx-version-min imply?

    - by Quincy
    When I pass the compiler flag "-mmacosx-version-min=10.5", what does it mean? I think it implies the resulting binary is x86, not PPC, but is it 32-bit or 64-bit? I'm compiling on Snow Leopard, so the default output binary is 64-bit. I'm not passing -universal, so I don't think it's a 32-bit/64-bit universal binary.

    Read the article

  • Error using Ksoap2 library in Android

    - by Joseph82
    I'm trying to use the KSoap2 library. I put the file Ksoap2-android-assembly-2.5.7-jar-with-dependencies.jar in the directory workspace/myproject/libs and then added it to the build path (Libraries). I get an error only when I run the application:

        11-27 10:32:56.260: E/dalvikvm(1593): Could not find class 'org.ksoap2.serialization.SoapObject', referenced from method associated to this code line:
        SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);

    I also tried the steps suggested in the thread "exception while using ksoap2 library for android", without success.

    Read the article

  • FireBird .net provider 64bit

    - by Lavinski
    I'm trying to get a Firebird web application (IIS6, 64-bit) to run. However, I'm getting bad image format (bitness incompatibility) issues. Has anyone got any advice on getting it running?

    Details: the AnyCPU application references the .NET Firebird driver (through NHibernate), which uses a native 64-bit DLL. There is a native 32-bit DLL which I use for local development, and it works fine. (I haven't got the 32-bit version working on the 64-bit server either.)

    Read the article

  • Fedora 13 post security update boot problem

    - by Alex
    Hello. About a month ago I installed a security update that had a new kernel (2.6.34.x, up from 2.6.33.x); this is when the problem occurred for the first time. After the install my computer would not boot at all: a black screen without any visible hard drive activity (I gave it a good 30 minutes on the black screen before taking action). I popped in the installation DVD and went into rescue mode to change the boot option back to the old kernel (it was just a guess where the problem was). After the restart the computer loaded just fine; it took a long time to start because an SELinux targeted policy relabel was required, and relabeling can take a very long time depending on file size. I assumed that the update got messed up somehow and continued working with the modified boot option. A couple of days ago there was another kernel update. I installed it and hit the same problem as before, which rules out the corrupted-update theory... Black screen right after the BIOS screen, before the OS gets loaded. I had to rescue the system again... Below is a copy of my grub.conf file. I am fairly new to Linux (a couple of years of experience), mostly development and basic config... nothing crazy.

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        #         all kernel and initrd paths are relative to /boot/, eg.
        #         root (hd0,0)
        #         kernel /vmlinuz-version ro root=/dev/mapper/vg_obalyuk-lv_root
        #         initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=2
        timeout=0
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.34.6-54.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-54.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-54.fc13.i686.PAE.img
        title Fedora (2.6.34.6-47.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-47.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-47.fc13.i686.PAE.img
        title Fedora (2.6.33.8-149.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.33.8-149.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.33.8-149.fc13.i686.PAE.img

    I like my system to be up to date... Let me know if I can post any other files that might help. Has anyone else had this problem? Does anyone have any ideas how to fix it? P.S. Anything helps, you ppl are great! Thanks for your time.

    Read the article

  • XML Parsing Error - C#

    - by Indebi
    My code hits an XML parsing error at line 7, position 32, and I'm not really sure why.

    Exact error dump:

        5/1/2010 10:21:42 AM System.Xml.XmlException: An error occurred while parsing EntityName. Line 7, position 32.
           at System.Xml.XmlTextReaderImpl.Throw(Exception e)
           at System.Xml.XmlTextReaderImpl.Throw(String res, Int32 lineNo, Int32 linePos)
           at System.Xml.XmlTextReaderImpl.HandleEntityReference(Boolean isInAttributeValue, EntityExpandType expandType, Int32& charRefEndPos)
           at System.Xml.XmlTextReaderImpl.ParseAttributeValueSlow(Int32 curPos, Char quoteChar, NodeData attr)
           at System.Xml.XmlTextReaderImpl.ParseAttributes()
           at System.Xml.XmlTextReaderImpl.ParseElement()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
           at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
           at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)
           at System.Xml.XmlDocument.Load(XmlReader reader)
           at System.Xml.XmlDocument.Load(String filename)
           at Lookoa.LoadingPackages.LoadingPackages_Load(Object sender, EventArgs e) in C:\Users\Administrator\Documents\Visual Studio 2010\Projects\Lookoa\Lookoa\LoadingPackages.cs:line 30

    XML file (please note this is just a sample, because I want the program to work before I begin to fill this repository):

        <repo>
          <Packages>
            <TheFirstPackage id="00001" longname="Mozilla Firefox" appver="3.6.3" pkgver="0.01" description="Mozilla Firefox is a free and open source web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. A Net Applications statistic put Firefox at 24.52% of the recorded usage share of web browsers as of March 2010[update], making it the second most popular browser in terms of current use worldwide after Microsoft's Internet Explorer." cat="WWW" rlsdate="4/8/10" pkgloc="http://google.com"/>
          </Packages>
          <categories>
            <WWW longname="World Wide Web" description="Software that's focus is communication or primarily uses the web for any particular reason."></WWW>
            <Fun longname="Entertainment & Other" description="Music Players, Video Players, Games, or anything that doesn't fit in any of the other categories."></Fun>
            <Work longname="Productivity" description="Application's commonly used for occupational needs or, stuff you work on"></Work>
            <Advanced longname="System & Security" description="Applications that protect the computer from malware, clean the computer, and other utilities."></Advanced>
          </categories>
        </repo>

    Small part of the C# code:

        // Loading the Package and Category lists.
        // The info from them is gonna populate the listboxes for Category and Packages.
        Repository.Load("repo.info");
        XmlNodeList Categories = Repository.GetElementsByTagName("categories");
        foreach (XmlNode Category in Categories)
        {
            CategoryNumber++;
            CategoryNames[CategoryNumber] = Category.Name;
            MessageBox.Show(CategoryNames[CategoryNumber]);
        }

    The MessageBox.Show() is just to make sure it's getting the correct results.

    Read the article

  • Implement OAuth in Java

    - by phineas
    I made an attempt to implement OAuth for my programming idea in Java, but I failed miserably. I don't know why, but my code doesn't work. Every time I run my program, an IOException is thrown with the reason "java.io.IOException: Server returned HTTP response code: 401" (401 means Unauthorized). I had a close look at the docs, but I really don't understand why it doesn't work. The OAuth provider I wanted to use is Twitter, where I've registered my app, too. Thanks in advance, phineas

    OAuth docs
    Twitter API wiki

    Class Base64Coder

        import java.io.InputStreamReader;
        import java.io.BufferedReader;
        import java.io.OutputStream;
        import java.io.IOException;
        import java.io.UnsupportedEncodingException;
        import java.net.URL;
        import java.net.URLEncoder;
        import java.net.URLConnection;
        import java.net.MalformedURLException;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.NoSuchAlgorithmException;
        import java.security.InvalidKeyException;

        public class Request {
            public static String read(String url) {
                StringBuffer buffer = new StringBuffer();
                try {
                    /**
                     * get the time - note: value below zero
                     * the millisecond value is used for oauth_nonce later on
                     */
                    int millis = (int) System.currentTimeMillis() * -1;
                    int time = (int) millis / 1000;

                    /**
                     * Listing of all parameters necessary to retrieve a token
                     * (sorted lexicographically as demanded)
                     */
                    String[][] data = {
                        {"oauth_callback", "SOME_URL"},
                        {"oauth_consumer_key", "MY_CONSUMER_KEY"},
                        {"oauth_nonce", String.valueOf(millis)},
                        {"oauth_signature", ""},
                        {"oauth_signature_method", "HMAC-SHA1"},
                        {"oauth_timestamp", String.valueOf(time)},
                        {"oauth_version", "1.0"}
                    };

                    /**
                     * Generation of the signature base string
                     */
                    String signature_base_string = "POST&" + URLEncoder.encode(url, "UTF-8") + "&";
                    for (int i = 0; i < data.length; i++) {
                        // ignore the empty oauth_signature field
                        if (i != 3) {
                            signature_base_string += URLEncoder.encode(data[i][0], "UTF-8") + "%3D"
                                + URLEncoder.encode(data[i][1], "UTF-8") + "%26";
                        }
                    }
                    // cut the last appended %26
                    signature_base_string = signature_base_string.substring(0, signature_base_string.length() - 3);

                    /**
                     * Sign the request
                     */
                    Mac m = Mac.getInstance("HmacSHA1");
                    m.init(new SecretKeySpec("CONSUMER_SECRET".getBytes(), "HmacSHA1"));
                    m.update(signature_base_string.getBytes());
                    byte[] res = m.doFinal();
                    String sig = String.valueOf(Base64Coder.encode(res));
                    data[3][1] = sig;

                    /**
                     * Create the header for the request
                     */
                    String header = "OAuth ";
                    for (String[] item : data) {
                        header += item[0] + "=\"" + item[1] + "\", ";
                    }
                    // cut off last appended comma
                    header = header.substring(0, header.length() - 2);

                    System.out.println("Signature Base String: " + signature_base_string);
                    System.out.println("Authorization Header: " + header);
                    System.out.println("Signature: " + sig);

                    String charset = "UTF-8";
                    URLConnection connection = new URL(url).openConnection();
                    connection.setDoInput(true);
                    connection.setDoOutput(true);
                    connection.setRequestProperty("Accept-Charset", charset);
                    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + charset);
                    connection.setRequestProperty("Authorization", header);
                    connection.setRequestProperty("User-Agent", "XXXX");

                    OutputStream output = connection.getOutputStream();
                    output.write(header.getBytes(charset));

                    BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                    String read;
                    while ((read = reader.readLine()) != null) {
                        buffer.append(read);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return buffer.toString();
            }

            public static void main(String[] args) {
                System.out.println(Request.read("http://api.twitter.com/oauth/request_token"));
            }
        }

    Read the article

  • Simple caesar cipher in java

    - by Max Canlas
    Hey, I'm making a simple Caesar cipher in Java using the formula [x -> (x + shift - 1) mod 127 + 1]. I want my encrypted text to contain only the ASCII characters other than the control characters (i.e. from 32 to 127). How can I keep the control characters from 0-31 out of the encrypted text? Thank you.
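
    One common way to guarantee that (shown as a sketch in C rather than Java, and an assumption about the intent rather than the exact formula above) is to shift within the 95-character printable range 32-126, so the result can never land on a control character:

        #include <stdio.h>

        /* Shift a printable ASCII character within the range 32..126. */
        char encrypt_printable(char c, int shift)
        {
            if (c < 32 || c > 126)                      /* leave non-printable input untouched */
                return c;
            return (char)((c - 32 + shift) % 95 + 32);  /* wrap inside the printable alphabet  */
        }

        int main(void)
        {
            const char *msg = "Attack at dawn!";
            for (const char *p = msg; *p; p++)
                putchar(encrypt_printable(*p, 5));
            putchar('\n');
            return 0;
        }

    Decryption is the same operation with the shift replaced by (95 - shift) mod 95.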

    Read the article

  • How to create small SPACES in HTML?

    - by karlthorwald
    There is an em dash and an en dash. Is there an "en" equivalent to &nbsp;? Is there an en equivalent to pure ASCII 32? I want a better way to write this: 123<span class="spanen">&nbsp;</span>456<span class="spanen">&nbsp;</span>789 or this: 123<span class="spanen"> </span>456<span class="spanen"> </span>789

    Read the article

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D-texture function using FreeType2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):

        FT_Face pFace = NULL;
        FT_Error nError = 0;
        FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer, &nSize));
        if ((nError = FT_New_Memory_Face(pLibrary, pFont, nSize, 0, &pFace)) == 0)
        {
            FT_Set_Char_Size(pFace, nSize << 6, nSize << 6, 96, 96);
            for (unsigned char c = 0; c < 95; c++)
            {
                if (!FT_Load_Glyph(pFace, FT_Get_Char_Index(pFace, c + 32), FT_LOAD_RENDER))
                {
                    FT_Glyph pGlyph;
                    if (!FT_Get_Glyph(pFace->glyph, &pGlyph))
                    {
                        LOG("GET: %c", c + 32);
                        FT_Glyph_To_Bitmap(&pGlyph, FT_RENDER_MODE_NORMAL, 0, 1);
                        FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                        FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                        const size_t nWidth = pBitmap->width;
                        const size_t nHeight = pBitmap->rows;
                        // add to texture atlas
                    }
                }
            }
        }
        else
        {
            FT_Done_Face(pFace);
            delete pFont;
            return FALSE;
        }
        FT_Done_Face(pFace);
        delete pFont;
        return TRUE;
        }

    ARCHIVE_LoadFile returns blocks allocated with new. As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether this stretches the font to fit the size or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (the MS Word size here), then turn them into a signed distance field in a 32px area.

    Update: After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code is running in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime; my app uses the multithreaded libs. Gonna see if I can set up a multithreaded test. Also updated to 2.4.4, to see if the problem was a known but fixed bug; it didn't help, however.

    Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the Windows heap management. Is it possible that there is a bug in FreeType2 that makes it blow up under user threads?

    Read the article

  • memcpy vs assignment in C

    - by SetJmp
    Under what circumstances should I expect memcpy to outperform assignment on modern Intel/AMD hardware? I am using GCC 4.2.x on a 32-bit Intel platform (but am interested in 64-bit as well).
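
    For context, the two forms being compared look like this (a minimal sketch, not from the article; for fixed-size objects of known type the compiler typically emits the same copy for both):

        #include <stdio.h>
        #include <string.h>

        struct Packet { int id; double payload[16]; };

        int main(void)
        {
            struct Packet a = { 42, { 1.0, 2.0 } };
            struct Packet b, c;

            b = a;                      /* structure assignment: type-checked copy   */
            memcpy(&c, &a, sizeof a);   /* memcpy: raw byte copy of the same object  */

            printf("%d %d\n", b.id, c.id);
            return 0;
        }

    memcpy becomes the only option when the size is not known at compile time or when copying through untyped buffers.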

    Read the article

  • GDI Image::Save returns Win32Error

    - by subbi
    Hi, I am using the GDI Image::Save method to save images to file in my application. I am getting a Win32Error (7) status in a few instances on Vista 64-bit. It works fine on Vista 32-bit, and the problem occurs randomly. Can you please suggest how to solve the problem? Thanks in advance. Regards, Subbi Reddy

    Read the article

  • memcpy segmentation fault on linux but not os x

    - by Andre
    I'm working on implementing a log-based file system for a file as a class project. I have a good amount of it working on my 64-bit OS X laptop, but when I try to run the code on the CS department's 32-bit Linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc.). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. Etc.

    So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, then copy starting at record into buf1 + the calculated offset, for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the Linux machine. Running some random tests, it gets more curious: the Linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing?

        Record *record = malloc(sizeof(Record));
        record->tid = tid;
        record->opType = OP_WRITE;
        record->opArg = sector;
        int i;
        for (i = 0; i < DISK_SECTOR_SIZE; i++) {
            record->data[i] = buf[i]; // *buf is passed into this function
        }
        char* buf1 = malloc(DISK_SECTOR_SIZE);
        char* buf2 = malloc(DISK_SECTOR_SIZE);
        d_read(ad->disk, ad->curLogSector, buf1);
        d_read(ad->disk, ad->curLogSector+1, buf2);
        memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset);
        memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));
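
    Worth noting when reading the snippet above (an observation about C pointer arithmetic, not from the article): record is a Record*, so record + N advances N * sizeof(Record) bytes rather than N bytes, which pushes the source of the second memcpy far past the 528-byte allocation. A self-contained sketch of the pitfall and the byte-wise alternative:

        #include <stdio.h>
        #include <string.h>

        /* Stand-in type; the question's Record is 528 bytes. */
        struct Record { int tid; char data[512]; };

        int main(void)
        {
            struct Record r = { 7, "payload" };
            unsigned char out[sizeof r];

            /* (&r + 100) would advance 100 * sizeof(struct Record) bytes - far out of bounds.
             * Casting to a byte pointer first makes the offset byte-granular. */
            const unsigned char *bytes = (const unsigned char *)&r;
            memcpy(out, bytes + 100, sizeof r - 100);   /* copy from byte 100 to the end */

            printf("copied %zu bytes\n", sizeof r - 100);
            return 0;
        }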

    Read the article

  • Compile a COM program

    - by Fantomas
    Can a COM program be 32-bit? How can I compile a COM program? I have TLINK32 and TASM32.

        tasm32 \t alex_7.asm
        pause
        tlink32 alex_7.obj
        pause
        td32 main.exe

    I've got the following error: "Fatal: 16 bit segments not supported in module alex_7.asm". I have DOSBox and I'm running Windows 7 x64.

    Read the article
