Search Results

Search found 4337 results on 174 pages for 'binary runner'.


  • How to find out memory layout of your data structure implementation on Linux 64bit machine

    - by ajay
    In this article, http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext the author talks about the memory layouts of two data structures - the binary heap and the B-heap - and compares how one has a better memory layout than the other. http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f5.jpg http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f6.jpg I want to get hands-on experience with this. I have an implementation of an N-ary tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the one in the article? Secondly, I think it is easier to identify the memory layout if it is an array-based implementation. If the implementation of a tree uses pointers, then what tools do we have, or what kind of approach is required, to map its memory layout? Thanks!
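
    One way to get at this, sketched in Python (using id() as a stand-in for a node's address is a CPython implementation detail, and for a real C/C++ tree you would print the node pointers with %p or inspect the process with pmap/gdb instead): walk the tree, record where each node lives, and bucket the addresses by 4 KiB page the way the article's figures do.

      # Hedged sketch: treats CPython's id() as an address (an implementation
      # detail) and assumes 4 KiB pages.
      from collections import defaultdict

      class Node(object):
          def __init__(self, value, children=()):
              self.value = value
              self.children = list(children)

      def layout_by_page(root, page_size=4096):
          pages = defaultdict(list)      # page number -> values of nodes on that page
          stack = [root]
          while stack:
              node = stack.pop()
              pages[id(node) // page_size].append(node.value)
              stack.extend(node.children)
          return pages

      tree = Node(0, [Node(i, [Node(10 * i + j) for j in range(3)]) for i in range(1, 4)])
      for page, values in sorted(layout_by_page(tree).items()):
          print('page %#x holds nodes %s' % (page, values))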

    Read the article

  • CMake missing environment variables errors

    - by Ben Crowhurst
    Hello, I'm attempting to use CMake on Mac OS X; I've installed both a binary version and also built it from source. However, I continue to receive the following errors when attempting to create a Makefile:
      cpc1-dumb4-2-0-cust166:build bcrowhurst$ cmake .
      CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly. Missing variable is: CMAKE_On_COMPILER_ENV_VAR
      CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly. Missing variable is: CMAKE_On_COMPILER
      CMake Error: Could not find cmake module file:/Users/bcrowhurst/NetBeansProjects/appon/build/CMakeFiles/CMakeOnCompiler.cmake
      CMake Error: Could not find cmake module file:CMakeOnInformation.cmake
      CMake Error: CMAKE_On_COMPILER not set, after EnableLanguage
      -- Boost version: 1.43.0
      -- Found the following Boost libraries:
      --   system
      -- Configuring incomplete, errors occurred!
    My CMakeLists.txt is as follows:
      cmake_minimum_required( VERSION 2.6 )
      project( Application On )
      find_package( Boost COMPONENTS system REQUIRED )
      link_directories( ${Boost_LIBRARY_DIRS} )
      if(Boost_FOUND)
        include_directories( ${Boost_INCLUDE_DIRS} )
        add_library( object ../source/object.cpp ../source/object.h )
        target_link_libraries( object ${Boost_SYSTEM_LIBRARY} )
      endif()
    Any help would be greatly appreciated. Thanks.

    Read the article

  • Eclipse RCP application launcher not working properly in Arabic

    - by el_eduardo
    I have an RCP application which I build using the .product file and PDE. In my product file I create a binary launcher for the different applications, for the user's convenience. It all works fine except when testing in Arabic. In Arabic the application starts, and it actually shows the Arabic characters that I mocked up for testing, but it does not mirror. That said, if I invoke the launcher and pass the -nl switch (launcher.exe -nl AR), then it mirrors. It also mirrors if I launch from the IDE with the target platform environment set to AR. I am shipping the bidi plugins for JFace and SWT (along with the NL plugins) and for the platform delta packs... Does anyone know what could be wrong with the launcher?

    Read the article

  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is, both lookup and filter operations are O(1) (or close enough that even on very large buckets - S3's disk equivalents - both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
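
    The "hash map plus sorted key list" idea described above can be sketched directly; this is a sketch of that approach in Python, not a claim about how S3 actually does it:

      import bisect

      class PrefixMap(object):
          """Hash-map lookup plus sorted-key prefix filtering."""
          def __init__(self):
              self._data = {}    # O(1) key -> value lookup
              self._keys = []    # sorted keys for prefix scans

          def put(self, key, value):
              if key not in self._data:
                  bisect.insort(self._keys, key)   # O(n) insert; a B-tree or skip list would do better
              self._data[key] = value

          def get(self, key):
              return self._data[key]

          def filter(self, prefix):
              # Binary search for the first key >= prefix, then walk forward
              # while the prefix still matches: O(log n + k) for k matches.
              start = bisect.bisect_left(self._keys, prefix)
              matches = []
              for k in self._keys[start:]:
                  if not k.startswith(prefix):
                      break
                  matches.append(k)
              return matches

    The filter here is O(log n + k) rather than O(1), and bisect.insort makes inserts O(n), which is the weak point of a flat sorted list compared to whatever index S3 actually keeps.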

    Read the article

  • Update table.column with another table.column with common joined column

    - by Matt
    Hit a speed bump trying to update some column values in my table from another table. This is what is supposed to happen when everything works: correct all the city/state entries in tblWADonations by creating an update statement that moves the city and state from the joined zip code table into tblWADonations.
      Tables and columns:
        tblZipcodes with zip, city, state
        tblWADonations with zip, oldcity, oldstate
    This is what I have so far:
      UPDATE tblWADonations
      SET oldCity = tblZipCodes.city, oldState = tblZipCodes.state
      FROM tblWADonations
      INNER JOIN tblZipCodes ON tblWADonations.zip = tblZipCodes.zip
      WHERE oldCity <> tblZipcodes.city;
    There seem to be easy ways to do this online, but I am overlooking something. I tried this by hand and in the editor, and this is what it kicks back:
      Msg 8152, Level 16, State 2, Line 1
      String or binary data would be truncated.
      The statement has been terminated.
    Please include a SQL statement, or where I need to make the edit, so I can mark this post as a reference in my favorites. Thanks!

    Read the article

  • Understanding a multilayer perceptron network

    - by Jonas Nielsen
    Hi all, I'm trying to understand how to train a multilayer perceptron; however, I'm having some trouble choosing the number of neurons in my network. For a specific task, I have four input sources that can each take one of three states. I guess that would mean four input neurons firing either 0, 1 or 2, but as far as I'm told, input should be kept binary? Furthermore, I'm having some issues choosing the number of neurons in the hidden layer. Any comments would be great. Thanks.
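
    One common way to keep the inputs binary, sketched below, is to one-hot encode each source: four sources with three states each become 4 x 3 = 12 binary input neurons (this is one convention, not the only valid encoding):

      def one_hot(states, num_states=3):
          """Encode a list of state indices (0, 1 or 2) as concatenated one-hot vectors."""
          encoded = []
          for s in states:
              bits = [0] * num_states
              bits[s] = 1
              encoded.extend(bits)
          return encoded

      print(one_hot([0, 2, 1, 2]))   # -> [1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1]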

    Read the article

  • retrieving long text (CLOB) using CFQuery

    - by CFUser
    I am using CFQuery to retrieve a CLOB field from an Oracle DB. If the CLOB field contains less than roughly 8000 characters, I can see the retrieved value (the output); however, if the value in the CLOB field is more than 8000 characters, the value is not retrieved. In <cfdump> I can see the query value come back as an 'empty string', though the value exists in the Oracle DB. I am using the Oracle driver; in the CF Admin console I enabled 'Enable long text retrieval (CLOB)' and 'Enable binary large object retrieval (BLOB)', and set the 'Long Text Buffer (chr)' and 'Blob Buffer (bytes)' values to 6400000. Any suggestions on how to retrieve the full text?

    Read the article

  • python c extension, problems with dlopen on mac os

    - by Jason Sundram
    I've taken a library that is distributed as a binary lib (.a) and header, written some C++ code against it, and want to wrap the results up in a Python module. I've done this here. The problem is that when importing this module on Mac OS X (I've tried 10.5 and 10.6), I get the following error:
      dlopen(/Library/Python/2.5/site-packages/dirac.so, 2): Symbol not found: _DisposePtr
        Referenced from: /Library/Python/2.5/site-packages/dirac.so
        Expected in: dynamic lookup
    It looks like symbols defined in the Carbon framework aren't being properly resolved, but I'm not sure what to do about that. I am supplying -framework Carbon to distutils.core.Extension's extra_link_args parameter, so I'm not sure what else I should do. Any help would be much appreciated.
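
    For reference, a minimal setup.py sketch for an extension like this; the source and library file names are assumptions (only the module name "dirac" comes from the post), and the detail to note is that distutils passes each element of extra_link_args to the linker as a separate token:

      # setup.py -- a hedged sketch, not the poster's actual build script.
      from distutils.core import setup, Extension

      dirac = Extension(
          'dirac',
          sources=['dirac_wrap.cpp'],        # hypothetical wrapper source
          extra_objects=['libdirac.a'],      # the binary-only .a (hypothetical name)
          # '-framework' and 'Carbon' as two list items; a single
          # '-framework Carbon' string reaches the linker as one argument.
          extra_link_args=['-framework', 'Carbon'],
      )

      setup(name='dirac', version='0.1', ext_modules=[dirac])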

    Read the article

  • Tutorial for Quick Look Generator for Mac

    - by vgm64
    I've checked out Apple's Quick Look Programming Guide: Introduction to Quick Look page in the Mac Dev Center, but as more of a science programmer than an Apple programmer, it is a little over my head (though I could get through it in a weekend if I bash my head against it long enough). Does anyone know of a good basic Quick Look generator tutorial that is simple enough for someone with only very modest experience with Xcode? For those who are curious, I have a file type called .evt that has an XML header and then binary info after the header. I'm trying to write a generator to display the XML header. There's no application bundle that it belongs to. Thanks!
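
    The non-Quick-Look half of that job (pulling the XML header out of a mixed XML-plus-binary file) can be prototyped outside Xcode. A sketch in Python, which assumes the header ends at the close of its root element; the .evt layout details here are assumptions, not something stated above:

      import re
      import xml.etree.ElementTree as ET

      def read_xml_header(path):
          with open(path, 'rb') as f:
              data = f.read()
          # Find the root element name from the first start tag after any <?xml ...?> prolog.
          m = re.search(rb'<([A-Za-z_][\w.-]*)[\s>]', data)
          if m is None:
              raise ValueError('no XML header found')
          root = m.group(1)
          end = data.index(b'</' + root + b'>') + len(root) + 3   # just past </root>
          return ET.fromstring(data[:end])

      # header = read_xml_header('run0042.evt')   # hypothetical file name
      # print(header.tag, header.attrib)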

    Read the article

  • Suggest a solution to track changes in a test DB and replicate them in another DB

    - by Parth
    Suggest a solution to track changes in a test DB and replicate them in another DB. My client has two databases: a test DB, in which he tests his data on a test portal, and a main DB that drives the live site. If he finds the test changes appropriate, he wants to apply them to the main DB. For this he needs a script or some solution to record or track all updates, deletions and insertions, so that he can repeat the same changes in the main DB if they are found appropriate. NOTE: we have only one server, no separate server, hence binary log replication doesn't seem to work for my case...

    Read the article

  • What is technically more advanced: Brainf*ck or Assembler?

    - by el ka es
    I wondered which of these languages is more powerful. By powerful I don't mean readability (assembler would naturally be the winner there), but something resulting from, for example, the following factors: Which of them is more high-level? (Neither really is, but one has to be more.) Which would likely be faster in compiled form? (There is no BF compiler out there as far as I know, but it wouldn't be hard to write one, I suppose.) Which of the two has the better code-length to code-action ratio? What I mean is: if you get too distracted by the improved readability of assembler compared to Brainf*ck, just think of writing the plain binary/machine code that assembler assembles to. Both languages are so basic that it should be possible to answer the question(s) in a rather objective way, I hope.

    Read the article

  • OCUnit testing an embedded framework

    - by d11wtq
    I've added a unit test target to my Xcode project, but it fails to find my framework when it builds, saying: Test.octest could not be loaded because a link error occurred. It is likely that dyld cannot locate a framework framework or library that the the test bundle was linked against, possibly because the framework or library had an incorrect install path at link time. My framework (the main project target) is designed to be embedded and so has an install path of @executable_path/../Frameworks. I've marked the framework as a direct dependency of the test target and I've added it to the "Link Binary with Libraries" build phase. Additionally, I've added a first step (after it's built the dependency) of "Copy Files" which simply copies the framework into the unit test bundle's Frameworks directory. Anyone got any experience with this? I'm not sure what I've missed.

    Read the article

  • Storing Interface type in ASP.NET Profile

    - by NathanD
    In my ASP.NET website all calls into the data layer return entities as interfaces, and the website does not need to know what the concrete type is. This works fine, but I have run into a problem when trying to store one of those types in the user Profile. My interface implements ISerializable, like the following: public interface IInsured : IPerson, IEntity, ISerializable, and the concrete type in the data layer does implement ISerializable. My Profile property in web.config is: <add name="ActiveInsured" type="FacadeInterfaces.IInsured" serializeAs="Binary" defaultValue="[null]"/> It compiles just fine, but I get a runtime error on Profile.Save() saying that the interface cannot be serialized. I have also tried it with serializeAs="Xml". I thought that if my interface implemented ISerializable it would work. Has anybody had this problem before, or know of a workaround?

    Read the article

  • Search algorithm for a sorted doubly linked list

    - by SalamiArmi
    As a learning exercise, I've just had an attempt at implementing my own merge sort algorithm. I did this on an std::list, which apparently already has the member functions sort() and merge() built in. However, I'm planning on moving this over to a linked list of my own making, so the implementation is not particularly important. The problem lies with the fact that an std::list doesn't have facilities for accessing random nodes, only for accessing the front/back and stepping through. I was originally planning on somehow performing a simple binary search through this list and finding my answer in a few steps. The fact that there are already built-in functions in an std::list for performing these kinds of ordering leads me to believe that there is an equally easy way to access the list in the way I want. Anyway, thanks for your help in advance!
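
    For what it's worth, here is a sketch (in Python rather than C++) of what a binary search over a step-only list ends up looking like: the number of comparisons is halved each round, but every probe still costs a walk from the head, so the whole search remains O(n):

      class Node(object):
          def __init__(self, value, next=None):
              self.value, self.next = value, next

      def advance(node, steps):
          for _ in range(steps):
              node = node.next
          return node

      def search(head, length, target):
          lo, hi = 0, length                         # index range into the sorted list
          while lo < hi:
              mid = (lo + hi) // 2
              if advance(head, mid).value < target:  # O(mid) walk per probe
                  lo = mid + 1
              else:
                  hi = mid
          node = advance(head, lo) if lo < length else None
          return node if node is not None and node.value == target else None

      head = Node(1, Node(3, Node(5, Node(7))))      # 1 -> 3 -> 5 -> 7
      print(search(head, 4, 5).value)                # 5

    This is also why std::list exposes sort() and merge() as member functions (they can relink nodes without random access); a genuinely O(log n) search needs an extra index structure, such as a skip list, on top of the list.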

    Read the article

  • Serializing Python bytestrings to JSON, preserving ordinal character values

    - by Doctor J
    I have some binary data produced as base-256 bytestrings in Python (2.x). I need to read these into JavaScript, preserving the ordinal value of each byte (char) in the string. If you'll allow me to mix languages, I want to encode a string s in Python such that ord(s[i]) == s.charCodeAt(i) after I've read it back into JavaScript. The cleanest way to do this seems to be to serialize my Python strings to JSON. However, json.dump doesn't like my bytestrings, despite fiddling with the ensure_ascii and encoding parameters. Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? Otherwise I think I need to encode the characters above the ASCII range into JSON-style \u1234 escapes; but a codec like this does not seem to be among Python's codecs. Is there an easy way to serialize Python bytestrings to JSON, preserving char values, or do I need to write my own encoder?
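
    One approach that fits the constraint (a sketch, not taken from the post): decode the bytestring as Latin-1, since that codec maps byte values 0-255 straight onto code points 0-255, so ord() is preserved on the Python side and charCodeAt() matches after JSON.parse on the JavaScript side:

      # Python 2.x, as in the question (the same round-trip also works on the
      # bytes type in Python 3).
      import json

      raw = b'\x00\x7f\x80\xff'                 # arbitrary base-256 bytestring
      as_text = raw.decode('latin-1')           # ord(raw[i]) == ord(as_text[i]) for every i
      payload = json.dumps({'data': as_text})   # non-ASCII chars become \u00xx escapes

      # Round-trip check on the Python side; in JS, JSON.parse(payload).data.charCodeAt(i)
      # yields the same ordinals.
      assert json.loads(payload)['data'].encode('latin-1') == raw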

    Read the article

  • Cannot install xdebug with WAMP SERVER 2.1

    - by Jimmy Nguyen
    Hi all, I use WAMP Server 2.1 with PHP 5.3.3 selected for my system, so I chose the xdebug build php_xdebug-2.1.0-5.3-vc6.dll and renamed it to php_xdebug.dll to make it easier to reference. Following the instructions, in php.ini (in the Apache folder):
      extension=php_xdebug.dll
      ...
      zend_extension = "C:/wamp/bin/php/php5.3.3/ext/php_xdebug.dll"
      xdebug.remote_enable=on
      xdebug.remote_handler=dbgp
      xdebug.remote_host=localhost
      xdebug.remote_port=9000
      xdebug.idekey="netbeans-xdebug"
    However, nothing happens: there is no information related to xdebug in phpinfo, and the xdebug wizard (http://xdebug.org/find-binary.php) also reports that xdebug is not installed. This configuration has already cost me a lot of time and I am close to giving up. If anyone has ideas on how to solve it, I would really appreciate the help. Thanks

    Read the article

  • SSRS2008: LocalReport export to HTML / fragment

    - by queen3
    I need a local RDL report to be exported to HTML, preferably an HTML fragment. In 2005 it wasn't officially supported, but there was a trick. In SSRS2008 they seem to have dropped this support (there's no HTML extension among the supported extensions when enumerating using reflection) and use RPL instead, which is a binary format that I doubt anyone will be happy to parse. Actually it doesn't seem to be about HTML at all. Now, is there a way to render HTML using an SSRS2008 local report? Note that I use VS2008, but with the reporting assemblies installed from the VS2010 Beta 2 ReportViewer.

    Read the article

  • Gothic Medieval font that you can embed with Cufón?

    - by BioGeek
    Hey, I'm looking for a font in Gothic Medieval style that I can embed with Cufón. I tried with the Cloister Black .ttf file but the generator responded with: The file you uploaded could not be converted. Currently only TrueType (TTF), OpenType (OTF), Printer Font Binary (PFB) and PostScript fonts are supported. If you're sure the font is valid, it is likely that the author of the font has decided to not allow modification and/or embedding of the font. This can happen quite often especially with "freeware" TrueType fonts. You must contact the author of the font for a less restricted version.

    Read the article

  • php selecting hash using wildcards

    - by tipu
    Say I have a hashmap: $hash = array('fox' => 'some value', 'fort' => 'some value 2', 'fork' => 'some value again'); I am trying to implement an autocomplete feature. When the user types 'fo', I would like to retrieve, via AJAX, all 3 keys from $hash. When the user types 'for', I would like to retrieve only the keys fort and fork. Is this possible? What I was thinking was using binary search to isolate the keys starting with 'f', instead of brute-force searching, and then continuing to eliminate entries as the user types out their query. Is there a more efficient solution to this?
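
    The binary-search idea maps naturally onto a sorted copy of the keys. Sketched here in Python rather than PHP (in PHP, ksort() plus a similar bounded scan would play the same role), two binary searches bound the matching slice before any key is touched:

      import bisect

      keys = sorted(['fox', 'fort', 'fork'])            # ['fork', 'fort', 'fox']

      def keys_with_prefix(sorted_keys, prefix):
          lo = bisect.bisect_left(sorted_keys, prefix)
          # '\uffff' as a sentinel assumes ordinary text keys below that code point.
          hi = bisect.bisect_right(sorted_keys, prefix + '\uffff')
          return sorted_keys[lo:hi]

      print(keys_with_prefix(keys, 'fo'))    # ['fork', 'fort', 'fox']
      print(keys_with_prefix(keys, 'for'))   # ['fork', 'fort']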

    Read the article

  • iPad application submission with iPhone SDK beta 5 rejected

    - by FredM
    I am trying to submit a specific iPad application to iTunes Connect before March 27, and as Apple says: "Only iPad apps compiled with iPhone SDK 3.2 beta 5 will be accepted for this initial review." So I compiled my application with iPhone SDK 3.2 beta 5 and a distribution provisioning profile. But when I upload my application to iTunes Connect, I get the following error: "The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the application." It certainly was built with beta 5! Do you have any idea? Thank you in advance. Fred

    Read the article

  • Problem reading from the StandardOutput from ftp.exe. Possible System.Diagnostics.Process Framework b

    - by SoMoS
    Hello, I was trying some stuff executing console applications when I found this problem handling the I/O of the ftp.exe command that everybody has on their computer. Just try this code:
      m_process = New Diagnostics.Process()
      m_process.StartInfo.FileName = "ftp.exe"
      m_process.StartInfo.CreateNoWindow = True
      m_process.StartInfo.RedirectStandardInput = True
      m_process.StartInfo.RedirectStandardOutput = True
      m_process.StartInfo.UseShellExecute = False
      m_process.Start()
      m_process.StandardInput.AutoFlush = True
      m_process.StandardInput.WriteLine("help")
      MsgBox(m_process.StandardOutput.ReadLine())
      MsgBox(m_process.StandardOutput.ReadLine())
      MsgBox(m_process.StandardOutput.ReadLine())
      MsgBox(m_process.StandardOutput.ReadLine())
    This should show you the text that ftp prints when you run "help" from the command line (the output below is from a Spanish locale; the first line means "Commands may be abbreviated. Commands:"):
      Los comandos se pueden abreviar. Comandos:
      ! delete literal prompt send ? debug ls put status append dir mdelete pwd trace ascii disconnect mdir quit type bell get mget quote user binary glob mkdir recv verbose bye hash mls remotehelp cd help mput rename close lcd open rmdir
    Instead of that I'm getting the first line and 3 more lines of garbage; after that the call to ReadLine blocks as if there were no data available. Any hints about that?

    Read the article

  • mysql, sqlite database source code

    - by Yang
    Hi guys, I know implementing a database is a huge topic, but I want to get a basic understanding of how database systems work (e.g. memory management, binary trees, transactions, SQL parsing, multi-threading, partitions, etc.) by investigating the source code of a database, since there are a few proven, very robust open-source databases like MySQL, SQLite and so on. However, the code is very complicated and I have no clue where to start. I also find that the old-school database textbooks only explain the theory, not the implementation details. Can anyone suggest how I should get started, and are there any books that emphasize the technology and techniques used to build a modern DBMS? Thanks in advance!

    Read the article

  • How to upload video on YouTube with Ruby

    - by viatropos
    I am trying to upload a YouTube video using the GData gem (I have seen the youtube_g gem but would like to make it work with pure GData if possible), but I keep getting this error:
      GData::Client::BadRequestError in 'MyProject::Google::YouTube should upload the actual video to youtube (once it does, mock this test out)'
      request error 400: No file found in upload request.
    I am using this code:
      def metadata
        data = <<-EOF
        <?xml version="1.0"?>
        <entry xmlns="http://www.w3.org/2005/Atom"
            xmlns:media="http://search.yahoo.com/mrss/"
            xmlns:yt="http://gdata.youtube.com/schemas/2007">
          <media:group>
            <media:title type="plain">Bad Wedding Toast</media:title>
            <media:description type="plain">
              I gave a bad toast at my friend's wedding.
            </media:description>
            <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category>
            <media:keywords>toast, wedding</media:keywords>
          </media:group>
        </entry>
        EOF
      end

      @yt = GData::Client::YouTube.new
      @yt.clientlogin("name", "pass")
      @yt.developer_key = "myKey"
      url = "http://uploads.gdata.youtube.com/feeds/api/users/name/uploads"
      mime_type = "multipart/related"
      file_path = "sample_upload.mp4"
      @yt.post_file(url, file_path, mime_type, metadata)
    What is the recommended/standard way to upload videos to YouTube with Ruby? What is your method?
    Update: after applying the changes to wrapped_entry, the string it produces looks like this:
      --END_OF_PART_59003 Content-Type: application/atom+xml; charset=UTF-8 <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> --END_OF_PART_59003 Content-Type: multipart/related Content-Transfer-Encoding: binary ...
and inspecting the request and response looks like this: Request: <GData::HTTP::Request:0x1b8bb44 @method=:post @url="http://uploads.gdata.youtube.com/feeds/api/users/lancejpollard/uploads" @body=#<GData::HTTP::MimeBody:0x1b8c738 @parts=[#<GData::HTTP::MimeBodyString:0x1b8c058 @bytes_read=0 @string="--END_OF_PART_30909\r\nContent-Type: application/atom+xml; charset=UTF-8\r\n\r\n <?xml version=\"1.0\"?>\n<entry xmlns=\"http://www.w3.org/2005/Atom\"\n xmlns:media=\"http://search.yahoo.com/mrss/\"\n xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">\n <media:group>\n <media:title type=\"plain\">Bad Wedding Toast</media:title>\n <media:description type=\"plain\">\n I gave a bad toast at my friend's wedding.\n </media:description>\n <media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">People</media:category>\n <media:keywords>toast wedding</media:keywords>\n </media:group>\n</entry> \n\r\n--END_OF_PART_30909\r\nContent-Type: multipart/related\r\nContent-Transfer-Encoding: binary\r\n\r\n"> #<File:/Users/Lance/Documents/Development/git/thing/spec/fixtures/sample_upload.mp4> #<GData::HTTP::MimeBodyString:0x1b8c044 @bytes_read=0 @string="\r\n--END_OF_PART_30909--"] @current_part=0 @boundary="END_OF_PART_30909" @headers={"Slug"="sample_upload.mp4" "User-Agent"="GoogleDataRubyUtil-AnonymousApp" "GData-Version"="2" "X-GData-Key"="key=AI39si7jkhs_ECjF4unOQz8gpWGSKXgq0KJpm8wywkvBSw4s8oJd5p5vkpvURHBNh-hiYJtoKwQqSfot7KoCkeCE32rNcZqMxA" "Content-Type"="multipart/related; boundary=\"END_OF_PART_30909\"" "MIME-Version"="1.0"} Response: #<GData::HTTP::Response:0x1b897e0 @body="No file found in upload request." @headers={"cache-control"=>"no-cache no-store must-revalidate" "connection"=>"close" "expires"=>"Fri 01 Jan 1990 00:00:00 GMT" "content-type"=>"text/plain; charset=utf-8" "date"=>"Fri 11 Dec 2009 02:10:25 GMT" "server"=>"Upload Server Built on Nov 30 2009 13:21:18 (1259616078)" "x-xss-protection"=>"0" "content-length"=>"32" "pragma"=>"no-cache"} @status_code=400> Still not working, I'll have to check it out more with those changes.

    Read the article

  • What is technically more advanced: Python or Assembler? [closed]

    - by el ka es
    I wondered which of these languages is more powerful. By powerful I don't mean readability (assembler would naturally be the winner there), but something resulting from, for example, the following factors: Which of them is more high-level? (Neither really is, but one has to be more.) Which would likely be faster in compiled form? (There is no Python compiler out there as far as I know, but it wouldn't be hard to write one, I suppose.) Which of the two has the better code-length to code-action ratio? What I mean is: if you get too distracted by the improved readability of assembler compared to Python, just think of writing the plain binary/machine code that assembler assembles to. Both languages are so basic that it should be possible to answer the question(s) in a rather objective way, I hope.

    Read the article

  • Storing varchar(max) & varbinary(max) together - Problem?

    - by Tony Basallo
    I have an app that will have entries of both varchar(max) and varbinary(max) data types. I was considering putting both of these in a separate table, together, even if only one of the two will be used at any given time. The question is whether storing them together has any impact on performance. Considering that they are stored in the heap, I'm thinking that having them together will not be a problem. However, the varchar(max) column will probably have the 'text in row' table option set. I couldn't find any performance testing or profiling while "googling bing," probably too specific a question? The SQL Server 2008 table looks like this:
      Id
      ParentId
      Version
      VersionDate
      StringContent - varchar(max)
      BinaryContent - varbinary(max)
    The app will decide which of the two columns to select when the data is queried. The string column will be used much more frequently than the binary column - will this have any impact on performance?

    Read the article
