Search Results

Search found 23545 results on 942 pages for 'parallel task library'.


  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed, it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:
    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology to interconnect it with the compute nodes
    - Tune the application to scale with the nodes to yet-unseen performance
    - Reduce the amount of data moving via compression
    - Provide all of this in a pre-integrated single product with a single-pane management interface
    All these ideas have been around in IT for quite some time now. The real Oracle advantage is the last one: putting them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.
    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:
    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar on single-threaded performance too - a humble 5x improvement over their ancestors. In fact, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip, for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms, or Logical Domains) is not a server-emulation virtualization technology but a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' hardware resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within a physical server.
    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?
    - They provide DB backend storage for the Oracle Databases you run on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.
    - These storage cells are SQL-aware. That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
    - They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the result, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance, so Flash storage is included for caching.
    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:
    - Real high speed: 40 Gbit/s, full duplex, of course. Oh, and really low latency.
    - RDMA - Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.
    IV. It includes a general-purpose storage too: the ZFS Storage Appliance (ZFSSA), unified storage providing both NAS and SAN access, with the following features:
    - NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - It is a general-purpose application data store for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.
    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have walked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.
    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines or even Oracle Solaris Zones, and monitor and manage them from a central UI.
    Here are the key takeaways once again. The SPARC SuperCluster:
    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks in size
    - Supports the consolidation of hundreds of applications
    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of the parts, and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.
    -- charlie

    Read the article

  • MPI: are there MPI libraries capable of message compression?

    - by osgx
    Sometimes MPI is used to send low-entropy data in messages. So it can be useful to try to compress messages before sending them. I know that MPI can work on very fast networks (10 Gbit/s and more), but many MPI programs are used with cheap networks like 0.1 or 1 Gbit/s Ethernet and with cheap (slow, low-bisection) network switches. There is a very fast compression algorithm, Snappy (wikipedia), whose compression speed is about 250 MB/s and decompression speed about 500 MB/s, so on compressible data and a slow network it would give some speedup. Is there any MPI library which can compress MPI messages (at the MPI layer; not compression of IP packets as in PPP)? MPI messages are also structured, so there could be some special method, like compressing the exponent part in an array of doubles.
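    To illustrate what I mean, here is a minimal hand-rolled sketch (my own code, not a feature of any MPI library I know of) that compresses a payload by hand before an MPI send. It assumes mpi4py, NumPy and zlib are available; zlib stands in for Snappy, which exposes a similar compress/decompress pair via the python-snappy package.
        import zlib
        import numpy as np
        from mpi4py import MPI   # assumes the mpi4py bindings are installed

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            payload = np.zeros(1000000, dtype=np.float64)     # low-entropy data compresses well
            packed = zlib.compress(payload.tobytes(), 1)      # level 1: fast, light compression
            comm.send(packed, dest=1, tag=42)                 # the compressed bytes go out as one message
        elif rank == 1:
            packed = comm.recv(source=0, tag=42)
            payload = np.frombuffer(zlib.decompress(packed), dtype=np.float64)
            print(rank, payload.size)

        # run with e.g.: mpirun -np 2 python compressed_send.py
    Obviously this only pays off when the compression time is smaller than the network time saved, which is exactly the slow-Ethernet case I describe above.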

    Read the article

  • Aborting jQuery().load()

    - by Daniel I-S
    The .load() function of the jQuery library allows you to selectively load elements from another page (subject to certain rules). I would like to know whether it is possible to abort the load process. In our application, a user can browse a list of items. They may choose to click a button which loads and displays additional information about an item from another document (which can take some time). If they choose a different item in the list whilst a .load() is still happening, I would like the loading to be aborted. Is this possible? Is it even worth it? Any ideas? Dan

    Read the article

  • Enabling PostgreSQL support in PHP on Mac OS X

    - by Jordan Scales
    I'm having a terribly difficult time getting pg_connect() to work properly on my Mac. I'm currently writing a PHP script (to be executed from the console) to read a PostgreSQL database and email a report. I've gone into my php.ini file and added:
        extension=pgsql.so
    But I'm met with the following error:
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/extensions/no-debug-non-zts-20090626/php_pgsql.so' - dlopen(/usr/lib/php/extensions/no-debug-non-zts-20090626/php_pgsql.so, 9): image not found in Unknown on line 0
        PHP Fatal error: Call to undefined function pg_connect() in... (blah file here)
    When running phpinfo(), I see nothing about PostgreSQL, so what is my issue here?

    Read the article

  • Can I retrieve data from server to client during an asynchronous post-back using the ASP.NET Ajax Library?

    - by André Pena
    The ASP.NET Ajax Library provides some client-side events. For instance:
        Sys.Application.add_load( function(args) {
            // handle the end of any asynchronous post-back. Every time there's
            // a server round-trip, this method will be called.
        } );
    During the asynchronous post-back I want to retrieve information on the client. This information must be available in some event like the one described above. Does the UpdatePanel or the ScriptManager have any server-side way to send data back to the client during an asynchronous post-back?

    Read the article

  • How to zoom in google map (J2ME)

    - by Nivek
    Hi all, I am trying to develop a J2ME application that can retrieve a Google map by passing in GPS coordinates. http://wiki.forum.nokia.com/index.php/Google_Maps_API_in_Java_ME provides the utility methods for map scrolling. Basically it states that I need to include the MicroFloat library in my project. Here's what I did (not sure if I am doing it right):
    1. Create a project, build the code.
    2. Add the jar file into my current project lib.
    But I am still getting errors from the code. Example:
        double LToY(double y) {
            return Math.round(
                offset - radius * Double.longBitsToDouble(MicroDouble.log(
                    Double.doubleToLongBits(
                        (1 + Math.sin(Math.toRadians(y))) / (1 - Math.sin(Math.toRadians(y)))
                    )
                )) / 2);
        }
    Am I missing any import statement? By the way, I am using NetBeans 6.5. Thanks for any guidance... Kevin

    Read the article

  • Adjacency List Tree Using Recursive WITH (Postgres 8.4) instead of Nested Set

    - by Koobz
    I'm looking for a Django tree library and doing my best to avoid Nested Sets (they're a nightmare to maintain). The cons of the adjacency list model have always been an inability to fetch descendants without resorting to multiple queries. The WITH clause in Postgres seems like a solid solution to this problem. Has anyone seen any performance reports regarding WITH vs. Nested Set? I assume the Nested set will still be faster but as long as they're in the same complexity class, I could swallow a 2x performance discrepancy. Django-Treebeard interests me. Does anyone know if they've implemented the WITH clause when running under Postgres? Has anyone here made the switch away from Nested Sets in light of the WITH clause?
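    For reference, this is the kind of query I have in mind - a rough sketch of my own against a hypothetical category(id, parent_id, name) adjacency-list table, fetching a whole subtree in one round trip with Postgres 8.4's recursive WITH, run here through psycopg2:
        import psycopg2  # assumes psycopg2 is installed and the connection string is adjusted

        SUBTREE_SQL = """
            WITH RECURSIVE subtree AS (
                SELECT id, parent_id, name
                FROM category
                WHERE id = %(root)s
              UNION ALL
                SELECT c.id, c.parent_id, c.name
                FROM category c
                JOIN subtree s ON c.parent_id = s.id
            )
            SELECT id, parent_id, name FROM subtree
        """

        conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical connection details
        cur = conn.cursor()
        cur.execute(SUBTREE_SQL, {"root": 42})           # node 42 plus all of its descendants
        for row in cur.fetchall():
            print(row)
    One query per subtree is exactly the property Nested Sets are usually chosen for, which is why I'm curious how the two compare in practice.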

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next-generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.
    Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
    - On-Demand webinar (coming soon!)
    - Slides used during the webinar
    - Guide to MySQL and NoSQL whitepaper
    - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.
    So here is the Q&A from the event.
    Q. Where does MySQL Cluster fit in to the CAP theorem?
    A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.
    Q. Can you configure the number of replicas? (the slide used a replication factor of 1)
    A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.
    Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
    A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.
    Q. Can you have cross node group JOINs? Wouldn't that run the risk of flooding the network?
    A. MySQL Cluster 7.2 supports cross-nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.
    Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx. 350k ops/sec per processor, which is the largest number I've seen lately.
    A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100-byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free, minimal-copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.
    Q. Is access control (like table-level) planned to be supported for NoSQL access mode?
    A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically, table-level controls are not planned.
    Q. How does the performance of the memcached API with MySQL compare against memcached + MySQL, or any other object cache like Ehcache with a MySQL DB?
    A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.
    Q. Can .NET access the memcached API?
    A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.
    Q. Is the row-level locking applicable when you update a column through the memcached API?
    A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.
    Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
    A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers.
    Q. For beginners we need more examples.
    A. Take a look here for a fully worked example.
    Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries etc?
    A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every MySQL distribution.
    Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
    A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.
    Q. Are there any white papers on this?
    A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.
    If you have further questions, please don't hesitate to use the comments below!
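    To make the memcached route above a little more concrete, here is a tiny sketch of mine (not from the webinar) showing that any standard memcached client can simply be pointed at the memcached server sitting in front of the cluster - in this case the python-memcached package and an assumed server on 127.0.0.1:11211:
        import memcache  # python-memcached; any memcached client library speaks the same protocol

        mc = memcache.Client(["127.0.0.1:11211"])   # the memcached front end for the cluster (assumed address)

        mc.set("user:42:name", "Alice")             # key/value write through the memcached protocol
        print(mc.get("user:42:name"))               # read it back through the same key

        mc.set("user:42:visits", "1")
        mc.incr("user:42:visits")                   # INCREMENT-style operations can be pushed down to the data nodes
    The key and value names here are purely illustrative; how keys map onto columns is a configuration detail of the memcached interface itself.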

    Read the article

  • Angularjs wait until

    - by Diolor
    I have:
        $scope.bounds = {}
    And later in my code:
        $scope.$on('leafletDirectiveMap.load', function(){
            console.log('runs');
            Dajaxice.async.hello($scope.populate, {
                'west' : $scope.bounds.southWest.lng,
                'east' : $scope.bounds.northEast.lng,
                'north': $scope.bounds.northEast.lat,
                'south': $scope.bounds.southWest.lat,
            });
        });
    As you can see, the bounds are empty at the beginning, but they are filled in later (after some milliseconds) by a JavaScript library (leaflet angular). However, the $scope.$on(...) handler runs before the bounds have been set, so 'west' : $scope.bounds.southWest.lng returns an error about an undefined variable. What I want to do is wait for the bounds (southWest and northEast) to be set and then run Dajaxice.async.hello(...). So I need something like "wait until bounds are set".

    Read the article

  • ZF1 + Doctrine 2 ODM: Call to undefined method AnnotationReader::setDefaultAnnotationNamespace

    - by Rafael
    I am trying to set up a ZF1 + Doctrine Mongo ODM 1.0.0BETA4-DEV project. I am using the https://github.com/Bittarman/zf-d2-odm branch, but when I update the Doctrine version from 1.0.0BETA3 to 1.0.0BETA4-DEV, I get the following error:
        SCREAM: Error suppression ignored for
        ( ! ) Fatal error: Call to undefined method Doctrine\Common\Annotations\AnnotationReader::setDefaultAnnotationNamespace() in C:\htdocs\zf-d2-odm\library\Lupi\Resource\Odm.php on line 34
        Call Stack
        #  Time    Memory   Function                                                           Location
        1  0.0007  139368   {main}( )                                                          ..\index.php:0
        2  0.0217  659008   Zend_Application->bootstrap( )                                     ..\index.php:25
        3  0.0217  659104   Zend_Application_Bootstrap_BootstrapAbstract->bootstrap( )         ..\Application.php:355
        4  0.0217  659120   Zend_Application_Bootstrap_BootstrapAbstract->_bootstrap( )        ..\BootstrapAbstract.php:586
        5  0.0314  1127240  Zend_Application_Bootstrap_BootstrapAbstract->_executeResource( )  ..\BootstrapAbstract.php:626
        6  0.0314  1127368  Lupi_Resource_Odm->init( )                                         ..\BootstrapAbstract.php:683

    Read the article

  • How do I return an ArrayList from a function?

    - by KoolKabin
    Hi guys, I learned from an MSDN example how to populate a ListBox control with an ArrayList: http://msdn.microsoft.com/en-us/library/1818w7we(v=VS.100).aspx
    I want to create a function which will return the USStates ArrayList and use the returned value as the datasource for ListBox1:
        Dim USStates As New ArrayList()
        USStates.Add(New USState("Alabama", "AL"))
        USStates.Add(New USState("Washington", "WA"))
        USStates.Add(New USState("West Virginia", "WV"))
        USStates.Add(New USState("Wisconsin", "WI"))
        USStates.Add(New USState("Wyoming", "WY"))
        ListBox1.DataSource = USStates
        ListBox1.DisplayMember = "LongName"
        ListBox1.ValueMember = "ShortName"
    I tried creating a function like:
        Public Shared Function FillList() As ArrayList()
            Dim USStates As New ArrayList()
            USStates.Add(New USState("Alabama", "AL"))
            USStates.Add(New USState("Washington", "WA"))
            USStates.Add(New USState("West Virginia", "WV"))
            USStates.Add(New USState("Wisconsin", "WI"))
            USStates.Add(New USState("Wyoming", "WY"))
            Return USStates
        End Function
    but it gives the error: Value of type 'System.Collections.ArrayList' cannot be converted to '1-dimensional array of System.Collections.ArrayList'.

    Read the article

  • Implementing fallback from Google AJAX Libraries API to local jQuery

    - by Maxim Z.
    After looking up the advantages and disadvantages of using Google's AJAX Libraries API instead of using jQuery locally, I saw that someone wrote in an answer (here on Stack Overflow, of course) that it's possible to get around the downtime that Google's API sometimes experiences by somehow falling back to a local copy of the library you use. I want to use Google's AJAX Libraries API on my site, but I'm concerned about this possible downtime and I'm curious how such a fallback procedure can be implemented. Has anybody ever tried doing this? Can you point me towards some code that accomplishes such a feat? Thanks in advance.

    Read the article

  • Who does non-decimal bignums with floating radix point?

    - by boost
    Nice as the Tcl libraries math::bignum and math::bigfloat are, the middle ground between the two needs to be addressed. Namely, bignums which are in different radices and have a radix point. At present math::bignum only handles integers (afaict) and math::bigfloat won't let you specify different radices to math::bigfloat::fromstr (ditto). Does anyone know of a library, for any of the major scripting languages (e.g. Tcl, Perl, Python, Ruby, Lua) or less major ones (newLISP for example), which implements bignums in different radices with handling for radix point?
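    To make the requirement concrete, here is a rough sketch of my own (in Python rather than Tcl) of what I mean by bignums in an arbitrary radix with a radix point - exact parsing built on arbitrary-precision integers and rationals:
        from fractions import Fraction

        DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

        def parse_radix(text, base):
            """Parse e.g. '1a.8' in base 16 into an exact Fraction (26 + 8/16)."""
            whole, _, frac = text.lower().partition(".")
            value = Fraction(0)
            for ch in whole:
                value = value * base + DIGITS.index(ch)
            scale = Fraction(1, base)
            for ch in frac:
                value += DIGITS.index(ch) * scale
                scale /= base
            return value

        print(parse_radix("1a.8", 16))   # -> 53/2, i.e. 26.5
    What I'm after is a library that handles this natively (parsing, arithmetic and printing back out in the chosen radix), rather than something I have to hand-roll like the above.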

    Read the article

  • Question about XML deserialization in .net

    - by ryudice
    Hi, I'm trying to deserialize some XML that comes from a web service, but I do not know how to tell the serializer how to handle this piece of XML:
        <Movimientos>
          <Movimientos>
            <NOM_ASOC>pI22E7P30KWB9KeUnI+JlMRBr7biS0JOJKo1JLJCy2ucI7n3MTFWkY5DhHyoPrWs</NOM_ASOC>
            <FEC1>RZq60KwjWAYPG269X4r9lRZrjbQo8eRqIOmE8qa5p/0=</FEC1>
            <IDENT_CLIE>IYbofEiD+wOCJ+ujYTUxgsWJTnGfVU+jcQyhzgQralM=</IDENT_CLIE>
          </Movimientos>
          <Movimientos>
    As you can see, the child tag uses the same name as its parent. I suppose this is wrong; however, the web service is provided by an external company and they won't change it. Is there any way or any library to tidy up the XML, or how can I use an attribute on my class so that the serializer gets it right? Thanks for any help.

    Read the article

  • Design view disappeared from Interface Builder

    - by skywalker168
    All of a sudden, the visual design window disappeared from my Interface Builder. It is a regular UIView with some UIImageViews, UILabels, and UIButtons on it. When I open IB, I can see the document window (with File's Owner, First Responder and View in it), the Library and the Inspector, but the visual design window is gone. Double-clicking "View" in the document window doesn't do anything. If I switch to List mode, I can see all the components on the view, but I can no longer find the visual design window. All other XIBs open just fine; only this XIB lost its design window. First I thought maybe it was hidden somewhere off the screen. I tried all kinds of things, even rebooting the computer, but nothing helped. Can anyone help? Thanks in advance! By the way, I'm running SDK 3.2 Beta 3.

    Read the article

  • How to format a date as a localized short MonthDay string

    - by Wouter
    I would like to format a DateTime as a string containing the abbreviated month name and the day, for use in axis labels on a graph. The default DateTime format strings do not include an abbreviated month. I guess there is no standard, but I could take the first 3 characters of the month name and substitute them into the MonthDay format. The reason I would use MonthDay is that the ordering of month and day is locale-dependent. Does anyone have a better idea? http://msdn.microsoft.com/en-us/library/az4se3k1.aspx#MonthDay

    Read the article

  • iTunes Visualization -- What type of code is it written in and what does that code look like?

    - by Christopher Altman
    Being a web developer, I know how event-driven user interfaces are written, but I do not have insight into other families of code (embedded software like automotive software, automation software on assembly lines, drivers, or the crawling lower-thirds on CNN, etc.). I was looking at the iTunes visualizer (example) and am curious: What code is used to write the visualizer? Objective-C? Does it use Core Animation? What type of abstraction does that library offer? What does the code look like? Is it a list of mathematical equations for producing the crazy graphics? Is it a list of key frames with tweening? Is there an array of images, fractals, worm holes, flowers, and sparkles, with some magic that mixes them together? Or something totally different? I am not looking for a tutorial, just an understanding of how something very different from web development works. Oh yeah, I know iTunes is closed source, so all of this is conjecture.

    Read the article

  • ASP.NET MVC: Where do you convert from Entities to ViewModels?

    - by Pino
    The title pretty much explains it all; it's the last thing I'm trying to work into our project. We are structured with a Service Library which contains a function like so:
        /// <summary>
        /// Returns a single category based on the specified ID.
        /// </summary>
        public Category GetCategory(int CategoryID)
        {
            var RetVal = _session.Single<Category>(x => x.ID == CategoryID);
            return RetVal;
        }
    Now, Category is an Entity (we are using Entity Framework) and we need to convert that to a CategoryViewModel. How would people structure this? Would you make the service function return a CategoryViewModel? Have the controller pull the data from the service and then call another function to convert it to a view model?

    Read the article

  • DLL export of a static function

    - by Begbie00
    Hi all - I have the following static function:
        static inline HandVal StdDeck_StdRules_EVAL_N( StdDeck_CardMask cards, int n_cards )
    Can I export this function in a DLL? If so, how? Thanks, Mike
    Background information: I'm doing this because the original source code came with a VS project designed to compile as a static (.lib) library. In order to use ctypes/Python, I'm converting the project to a DLL. I started a VS project as a DLL and imported the original source code. The project builds into a DLL, but none of the functions (including functions such as the one listed above) are exported (as confirmed by both the absence of dllexport in the source code and tools such as DLL Export Viewer). I tried to follow the general advice here (create an exportable wrapper function within the header) to no avail... the functions still don't appear to be exported.

    Read the article

  • ruby 1.9: invalid byte sequence in UTF-8

    - by Marc Seeger
    I'm writing a crawler in Ruby (1.9) that consumes lots of HTML from a lot of random sites. When trying to extract links, I decided to just use .scan(/href="(.*?)"/i) instead of nokogiri/hpricot (major speedup). The problem is that I now receive a lot of "invalid byte sequence in UTF-8" errors. From what I understood, the net/http library doesn't have any encoding-specific options and the stuff that comes in is basically not properly tagged. What would be the best way to actually work with that incoming data? I tried .encode with the replace and invalid options set, but no success so far...

    Read the article

  • Generating a reasonable ctags database for Boost

    - by Robert S. Barnes
    I'm running Ubuntu 8.04 and I ran the command:
        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/stdlibcpp /usr/include/c++/4.2.4/
    to generate a ctags database for the standard C++ library and STL (libstdc++) on my system, for use with the OmniCppComplete vim script. This gave me a very reasonable 4 MB tags file which seems to work fairly well. However, when I ran the same command against the installed Boost headers:
        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/boost /usr/include/boost/
    I ended up with a 1.4 GB tags file! I haven't tried it yet, but that seems like it's going to be too large to be useful. Is there a way to get a slimmer, more usable tags file for my installed Boost headers?
    Edit: Just as a note, libstdc++ includes TR1, which has a lot of Boost libs in it. So there must be something weird going on for libstdc++ to come out with a 4 MB tags file and Boost to end up with a 1.4 GB tags file.

    Read the article

  • Smarty debug mode not displaying included templates.

    - by Kyle Sevenoaks
    On www.euroworker.no/order I have set Smarty's debug mode on with {debug output=html} in the header, so it will debug every page. But it says:
        Smarty Debug Console
        included templates & config files (load time in seconds):
        no templates included
    And after a list of template variables, at {$cart}:
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 561962 bytes) in /home/euroworkerno/www/library/smarty/libs/plugins/modifier.debug_print_var.php on line 30
    It also doesn't display a list of templates for any URL. This is strange; can anyone point me to why it won't display the list of .tpls? I need to find some HTML comments that someone has left in, to rid IE of a display bug. Thanks.

    Read the article

  • Problems with South/Django: not recognizing the Django App

    - by christmasgorilla
    I've got a Django project on my machine, and when I try to use South to migrate the data schema, I get several odd errors. Example:
        $ python manage.py convert_to_south thisLocator
        /Library/Python/2.6/site-packages/registration/models.py:4: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
          import sha
        /Users/cm/code/thisLocator/../thisLocator/batches/models.py:6: DeprecationWarning: the md5 module is deprecated; use hashlib instead
          import md5
        There is no enabled application matching 'thisLocator'.
    I've followed the South documentation. settings.py has it in the installed apps, and I can run import south from the manage.py shell. Everyone else on my team is calling the app thisLocator. Am I doing something really stupid?
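    For reference, here is a sketch of the layout the South docs describe (the app name batches is only assumed from the traceback paths above): convert_to_south is given the label of an app listed in INSTALLED_APPS, rather than the project name.
        # settings.py (sketch; 'batches' is an assumption taken from the traceback)
        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'south',       # South itself has to be enabled
            'batches',     # the app whose schema South should manage
        )

        # then, from the project directory:
        #   python manage.py syncdb
        #   python manage.py convert_to_south batches   # an INSTALLED_APPS label, not the project name
    If thisLocator really is only the project (settings) package and not an app in INSTALLED_APPS, that would explain the "no enabled application" message, but I'm not sure that's my case.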

    Read the article

  • Creating Excel Files with # in Column Name

    - by Superdumbell
    I'm having a problem creating Excel files using Jet. When I create a table and give it a column name, as in:
        CreateTable [Sheet1] ([ColumnName#] String)
    it replaces the column header with ColumnName. Is there a way I can make Excel give the column headers a name without any conflict over which characters I can have in it? Are there any escape characters that I can use in the column names? Is there a cheap (~$50) or free .NET library that would give me better control over the Excel file and would allow me to create both XLS and XLSX files without having Excel installed? Basically what I'm trying to accomplish is having a DataTable get dumped into an Excel file and have the column names appear just as they do in the DataTable.

    Read the article

  • Web scraping with Python

    - by Jack
    I'm currently trying to scrape a website that has fairly poorly formatted HTML (often missing closing tags, no use of classes or ids, so it's incredibly difficult to go straight to the element you want, etc.). I've been using BeautifulSoup with some success so far, but every once in a while (though quite rarely) I run into a page where BeautifulSoup creates the HTML tree a bit differently from (for example) Firefox or WebKit. While this is understandable, as the formatting of the HTML leaves it ambiguous, if I were able to get the same parse tree as Firefox or WebKit produce, I would be able to parse things much more easily. The problems are usually something like: the site opens a <b> tag twice, and when BeautifulSoup sees the second <b> tag it immediately closes the first, while Firefox and WebKit nest the <b> tags. Is there a web scraping library for Python (or even any other language, I'm getting desperate) that can reproduce the parse tree generated by Firefox or WebKit (or at least get closer than BeautifulSoup in cases of ambiguity)?
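    To illustrate the kind of thing I'm hoping exists - a minimal sketch (not something I've vetted against my problem pages) using the html5lib tree builder through BeautifulSoup 4, since html5lib implements the HTML5 parsing algorithm, i.e. the same error-recovery rules browsers such as Firefox and WebKit follow:
        from bs4 import BeautifulSoup   # assumes the beautifulsoup4 and html5lib packages are installed

        html = "<p><b>one<b>two</p>"    # deliberately malformed: the first <b> is never closed

        soup = BeautifulSoup(html, "html5lib")   # browser-style error recovery instead of the default parser
        print(soup.prettify())                   # the second <b> ends up nested inside the first, as in a browser

        # html5lib can also be used on its own and returns an ElementTree-style document:
        # import html5lib
        # doc = html5lib.parse(html)
    Whether that matches Firefox and WebKit closely enough on real-world tag soup is exactly what I'd like to hear about.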

    Read the article
